Test Report: KVM_Linux_crio 19476

                    
5d2be5ad06c5c8c1678cb56a2620c3837d13735d:2024-08-19:35852

Test fail (11/208)

TestAddons/Setup (2400.07s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-479471 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-479471 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: signal: killed (39m59.950688402s)

-- stdout --
	* [addons-479471] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19476
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19476-99410/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "addons-479471" primary control-plane node in "addons-479471" cluster
	* Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image ghcr.io/helm/tiller:v2.17.0
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	  - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	  - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	  - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	  - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	  - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	  - Using image docker.io/registry:2.8.3
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	  - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	  - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	  - Using image docker.io/marcnuri/yakd:0.0.5
	  - Using image docker.io/busybox:stable
	  - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	* Verifying registry addon...
	* To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-479471 service yakd-dashboard -n yakd-dashboard
	
	* Verifying ingress addon...
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	* Verifying csi-hostpath-driver addon...
	  - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	* Verifying gcp-auth addon...
	* Your GCP credentials will now be mounted into every pod created in the addons-479471 cluster.
	* If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	* If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	* Enabled addons: storage-provisioner, cloud-spanner, ingress-dns, default-storageclass, nvidia-device-plugin, storage-provisioner-rancher, helm-tiller, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth

-- /stdout --
** stderr ** 
	I0819 10:45:21.382853  107271 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:45:21.383011  107271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:45:21.383024  107271 out.go:358] Setting ErrFile to fd 2...
	I0819 10:45:21.383030  107271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:45:21.383560  107271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 10:45:21.384326  107271 out.go:352] Setting JSON to false
	I0819 10:45:21.385279  107271 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1667,"bootTime":1724062654,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 10:45:21.385349  107271 start.go:139] virtualization: kvm guest
	I0819 10:45:21.387367  107271 out.go:177] * [addons-479471] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 10:45:21.388774  107271 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 10:45:21.388854  107271 notify.go:220] Checking for updates...
	I0819 10:45:21.391390  107271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:45:21.392668  107271 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 10:45:21.394036  107271 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 10:45:21.395271  107271 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 10:45:21.396446  107271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 10:45:21.397906  107271 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:45:21.433537  107271 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 10:45:21.434692  107271 start.go:297] selected driver: kvm2
	I0819 10:45:21.434725  107271 start.go:901] validating driver "kvm2" against <nil>
	I0819 10:45:21.434744  107271 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 10:45:21.436029  107271 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:45:21.436157  107271 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19476-99410/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 10:45:21.453514  107271 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 10:45:21.453592  107271 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 10:45:21.453790  107271 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:45:21.453854  107271 cni.go:84] Creating CNI manager for ""
	I0819 10:45:21.453867  107271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 10:45:21.453874  107271 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 10:45:21.453930  107271 start.go:340] cluster config:
	{Name:addons-479471 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-479471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:45:21.454022  107271 iso.go:125] acquiring lock: {Name:mkf3e460532d8bd3cc1352bbf698537d79a481a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:45:21.455653  107271 out.go:177] * Starting "addons-479471" primary control-plane node in "addons-479471" cluster
	I0819 10:45:21.457034  107271 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 10:45:21.457085  107271 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 10:45:21.457097  107271 cache.go:56] Caching tarball of preloaded images
	I0819 10:45:21.457194  107271 preload.go:172] Found /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 10:45:21.457205  107271 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 10:45:21.457512  107271 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/config.json ...
	I0819 10:45:21.457540  107271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/config.json: {Name:mk1b0d127e851a7c9440226e92e36449e84d4a7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:45:21.457701  107271 start.go:360] acquireMachinesLock for addons-479471: {Name:mk79269fbbd6912302c8df8ddb039d4e2f1a0790 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:45:21.457748  107271 start.go:364] duration metric: took 32.14µs to acquireMachinesLock for "addons-479471"
	I0819 10:45:21.457767  107271 start.go:93] Provisioning new machine with config: &{Name:addons-479471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-479471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 10:45:21.457833  107271 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 10:45:21.459446  107271 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0819 10:45:21.459578  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:45:21.459607  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:45:21.474937  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44663
	I0819 10:45:21.475455  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:21.476155  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:45:21.476181  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:21.476584  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:21.476855  107271 main.go:141] libmachine: (addons-479471) Calling .GetMachineName
	I0819 10:45:21.477038  107271 main.go:141] libmachine: (addons-479471) Calling .DriverName
	I0819 10:45:21.477259  107271 start.go:159] libmachine.API.Create for "addons-479471" (driver="kvm2")
	I0819 10:45:21.477289  107271 client.go:168] LocalClient.Create starting
	I0819 10:45:21.477337  107271 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem
	I0819 10:45:21.611004  107271 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem
	I0819 10:45:21.732108  107271 main.go:141] libmachine: Running pre-create checks...
	I0819 10:45:21.732134  107271 main.go:141] libmachine: (addons-479471) Calling .PreCreateCheck
	I0819 10:45:21.732717  107271 main.go:141] libmachine: (addons-479471) Calling .GetConfigRaw
	I0819 10:45:21.733200  107271 main.go:141] libmachine: Creating machine...
	I0819 10:45:21.733214  107271 main.go:141] libmachine: (addons-479471) Calling .Create
	I0819 10:45:21.733378  107271 main.go:141] libmachine: (addons-479471) Creating KVM machine...
	I0819 10:45:21.734524  107271 main.go:141] libmachine: (addons-479471) DBG | found existing default KVM network
	I0819 10:45:21.735302  107271 main.go:141] libmachine: (addons-479471) DBG | I0819 10:45:21.735157  107293 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c10}
	I0819 10:45:21.735375  107271 main.go:141] libmachine: (addons-479471) DBG | created network xml: 
	I0819 10:45:21.735404  107271 main.go:141] libmachine: (addons-479471) DBG | <network>
	I0819 10:45:21.735417  107271 main.go:141] libmachine: (addons-479471) DBG |   <name>mk-addons-479471</name>
	I0819 10:45:21.735428  107271 main.go:141] libmachine: (addons-479471) DBG |   <dns enable='no'/>
	I0819 10:45:21.735434  107271 main.go:141] libmachine: (addons-479471) DBG |   
	I0819 10:45:21.735452  107271 main.go:141] libmachine: (addons-479471) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0819 10:45:21.735467  107271 main.go:141] libmachine: (addons-479471) DBG |     <dhcp>
	I0819 10:45:21.735477  107271 main.go:141] libmachine: (addons-479471) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0819 10:45:21.735489  107271 main.go:141] libmachine: (addons-479471) DBG |     </dhcp>
	I0819 10:45:21.735499  107271 main.go:141] libmachine: (addons-479471) DBG |   </ip>
	I0819 10:45:21.735509  107271 main.go:141] libmachine: (addons-479471) DBG |   
	I0819 10:45:21.735518  107271 main.go:141] libmachine: (addons-479471) DBG | </network>
	I0819 10:45:21.735526  107271 main.go:141] libmachine: (addons-479471) DBG | 
	I0819 10:45:21.741165  107271 main.go:141] libmachine: (addons-479471) DBG | trying to create private KVM network mk-addons-479471 192.168.39.0/24...
	I0819 10:45:21.813968  107271 main.go:141] libmachine: (addons-479471) DBG | private KVM network mk-addons-479471 192.168.39.0/24 created
	I0819 10:45:21.814010  107271 main.go:141] libmachine: (addons-479471) DBG | I0819 10:45:21.813922  107293 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 10:45:21.814032  107271 main.go:141] libmachine: (addons-479471) Setting up store path in /home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471 ...
	I0819 10:45:21.814050  107271 main.go:141] libmachine: (addons-479471) Building disk image from file:///home/jenkins/minikube-integration/19476-99410/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 10:45:21.814143  107271 main.go:141] libmachine: (addons-479471) Downloading /home/jenkins/minikube-integration/19476-99410/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19476-99410/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:45:22.068674  107271 main.go:141] libmachine: (addons-479471) DBG | I0819 10:45:22.068537  107293 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa...
	I0819 10:45:22.228256  107271 main.go:141] libmachine: (addons-479471) DBG | I0819 10:45:22.228089  107293 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/addons-479471.rawdisk...
	I0819 10:45:22.228288  107271 main.go:141] libmachine: (addons-479471) DBG | Writing magic tar header
	I0819 10:45:22.228311  107271 main.go:141] libmachine: (addons-479471) DBG | Writing SSH key tar header
	I0819 10:45:22.228324  107271 main.go:141] libmachine: (addons-479471) DBG | I0819 10:45:22.228245  107293 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471 ...
	I0819 10:45:22.228338  107271 main.go:141] libmachine: (addons-479471) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471
	I0819 10:45:22.228360  107271 main.go:141] libmachine: (addons-479471) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube/machines
	I0819 10:45:22.228369  107271 main.go:141] libmachine: (addons-479471) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 10:45:22.228417  107271 main.go:141] libmachine: (addons-479471) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471 (perms=drwx------)
	I0819 10:45:22.228449  107271 main.go:141] libmachine: (addons-479471) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube/machines (perms=drwxr-xr-x)
	I0819 10:45:22.228461  107271 main.go:141] libmachine: (addons-479471) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410
	I0819 10:45:22.228484  107271 main.go:141] libmachine: (addons-479471) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube (perms=drwxr-xr-x)
	I0819 10:45:22.228503  107271 main.go:141] libmachine: (addons-479471) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410 (perms=drwxrwxr-x)
	I0819 10:45:22.228511  107271 main.go:141] libmachine: (addons-479471) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 10:45:22.228519  107271 main.go:141] libmachine: (addons-479471) DBG | Checking permissions on dir: /home/jenkins
	I0819 10:45:22.228527  107271 main.go:141] libmachine: (addons-479471) DBG | Checking permissions on dir: /home
	I0819 10:45:22.228534  107271 main.go:141] libmachine: (addons-479471) DBG | Skipping /home - not owner
	I0819 10:45:22.228542  107271 main.go:141] libmachine: (addons-479471) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 10:45:22.228548  107271 main.go:141] libmachine: (addons-479471) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 10:45:22.228554  107271 main.go:141] libmachine: (addons-479471) Creating domain...
	I0819 10:45:22.229699  107271 main.go:141] libmachine: (addons-479471) define libvirt domain using xml: 
	I0819 10:45:22.229736  107271 main.go:141] libmachine: (addons-479471) <domain type='kvm'>
	I0819 10:45:22.229745  107271 main.go:141] libmachine: (addons-479471)   <name>addons-479471</name>
	I0819 10:45:22.229759  107271 main.go:141] libmachine: (addons-479471)   <memory unit='MiB'>4000</memory>
	I0819 10:45:22.229795  107271 main.go:141] libmachine: (addons-479471)   <vcpu>2</vcpu>
	I0819 10:45:22.229819  107271 main.go:141] libmachine: (addons-479471)   <features>
	I0819 10:45:22.229832  107271 main.go:141] libmachine: (addons-479471)     <acpi/>
	I0819 10:45:22.229845  107271 main.go:141] libmachine: (addons-479471)     <apic/>
	I0819 10:45:22.229866  107271 main.go:141] libmachine: (addons-479471)     <pae/>
	I0819 10:45:22.229882  107271 main.go:141] libmachine: (addons-479471)     
	I0819 10:45:22.229895  107271 main.go:141] libmachine: (addons-479471)   </features>
	I0819 10:45:22.229909  107271 main.go:141] libmachine: (addons-479471)   <cpu mode='host-passthrough'>
	I0819 10:45:22.229924  107271 main.go:141] libmachine: (addons-479471)   
	I0819 10:45:22.229942  107271 main.go:141] libmachine: (addons-479471)   </cpu>
	I0819 10:45:22.229954  107271 main.go:141] libmachine: (addons-479471)   <os>
	I0819 10:45:22.229965  107271 main.go:141] libmachine: (addons-479471)     <type>hvm</type>
	I0819 10:45:22.229973  107271 main.go:141] libmachine: (addons-479471)     <boot dev='cdrom'/>
	I0819 10:45:22.229980  107271 main.go:141] libmachine: (addons-479471)     <boot dev='hd'/>
	I0819 10:45:22.229985  107271 main.go:141] libmachine: (addons-479471)     <bootmenu enable='no'/>
	I0819 10:45:22.229994  107271 main.go:141] libmachine: (addons-479471)   </os>
	I0819 10:45:22.230003  107271 main.go:141] libmachine: (addons-479471)   <devices>
	I0819 10:45:22.230018  107271 main.go:141] libmachine: (addons-479471)     <disk type='file' device='cdrom'>
	I0819 10:45:22.230035  107271 main.go:141] libmachine: (addons-479471)       <source file='/home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/boot2docker.iso'/>
	I0819 10:45:22.230047  107271 main.go:141] libmachine: (addons-479471)       <target dev='hdc' bus='scsi'/>
	I0819 10:45:22.230059  107271 main.go:141] libmachine: (addons-479471)       <readonly/>
	I0819 10:45:22.230067  107271 main.go:141] libmachine: (addons-479471)     </disk>
	I0819 10:45:22.230080  107271 main.go:141] libmachine: (addons-479471)     <disk type='file' device='disk'>
	I0819 10:45:22.230093  107271 main.go:141] libmachine: (addons-479471)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 10:45:22.230109  107271 main.go:141] libmachine: (addons-479471)       <source file='/home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/addons-479471.rawdisk'/>
	I0819 10:45:22.230121  107271 main.go:141] libmachine: (addons-479471)       <target dev='hda' bus='virtio'/>
	I0819 10:45:22.230128  107271 main.go:141] libmachine: (addons-479471)     </disk>
	I0819 10:45:22.230140  107271 main.go:141] libmachine: (addons-479471)     <interface type='network'>
	I0819 10:45:22.230150  107271 main.go:141] libmachine: (addons-479471)       <source network='mk-addons-479471'/>
	I0819 10:45:22.230160  107271 main.go:141] libmachine: (addons-479471)       <model type='virtio'/>
	I0819 10:45:22.230165  107271 main.go:141] libmachine: (addons-479471)     </interface>
	I0819 10:45:22.230172  107271 main.go:141] libmachine: (addons-479471)     <interface type='network'>
	I0819 10:45:22.230177  107271 main.go:141] libmachine: (addons-479471)       <source network='default'/>
	I0819 10:45:22.230185  107271 main.go:141] libmachine: (addons-479471)       <model type='virtio'/>
	I0819 10:45:22.230191  107271 main.go:141] libmachine: (addons-479471)     </interface>
	I0819 10:45:22.230220  107271 main.go:141] libmachine: (addons-479471)     <serial type='pty'>
	I0819 10:45:22.230241  107271 main.go:141] libmachine: (addons-479471)       <target port='0'/>
	I0819 10:45:22.230252  107271 main.go:141] libmachine: (addons-479471)     </serial>
	I0819 10:45:22.230261  107271 main.go:141] libmachine: (addons-479471)     <console type='pty'>
	I0819 10:45:22.230273  107271 main.go:141] libmachine: (addons-479471)       <target type='serial' port='0'/>
	I0819 10:45:22.230280  107271 main.go:141] libmachine: (addons-479471)     </console>
	I0819 10:45:22.230299  107271 main.go:141] libmachine: (addons-479471)     <rng model='virtio'>
	I0819 10:45:22.230310  107271 main.go:141] libmachine: (addons-479471)       <backend model='random'>/dev/random</backend>
	I0819 10:45:22.230316  107271 main.go:141] libmachine: (addons-479471)     </rng>
	I0819 10:45:22.230323  107271 main.go:141] libmachine: (addons-479471)     
	I0819 10:45:22.230328  107271 main.go:141] libmachine: (addons-479471)     
	I0819 10:45:22.230335  107271 main.go:141] libmachine: (addons-479471)   </devices>
	I0819 10:45:22.230341  107271 main.go:141] libmachine: (addons-479471) </domain>
	I0819 10:45:22.230348  107271 main.go:141] libmachine: (addons-479471) 
	I0819 10:45:22.236931  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:46:f7:57 in network default
	I0819 10:45:22.238875  107271 main.go:141] libmachine: (addons-479471) Ensuring networks are active...
	I0819 10:45:22.238910  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:22.239756  107271 main.go:141] libmachine: (addons-479471) Ensuring network default is active
	I0819 10:45:22.240067  107271 main.go:141] libmachine: (addons-479471) Ensuring network mk-addons-479471 is active
	I0819 10:45:22.240595  107271 main.go:141] libmachine: (addons-479471) Getting domain xml...
	I0819 10:45:22.241185  107271 main.go:141] libmachine: (addons-479471) Creating domain...
	I0819 10:45:23.657316  107271 main.go:141] libmachine: (addons-479471) Waiting to get IP...
	I0819 10:45:23.658188  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:23.658675  107271 main.go:141] libmachine: (addons-479471) DBG | unable to find current IP address of domain addons-479471 in network mk-addons-479471
	I0819 10:45:23.658733  107271 main.go:141] libmachine: (addons-479471) DBG | I0819 10:45:23.658664  107293 retry.go:31] will retry after 269.945923ms: waiting for machine to come up
	I0819 10:45:23.930290  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:23.930734  107271 main.go:141] libmachine: (addons-479471) DBG | unable to find current IP address of domain addons-479471 in network mk-addons-479471
	I0819 10:45:23.930760  107271 main.go:141] libmachine: (addons-479471) DBG | I0819 10:45:23.930690  107293 retry.go:31] will retry after 252.592373ms: waiting for machine to come up
	I0819 10:45:24.185168  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:24.185554  107271 main.go:141] libmachine: (addons-479471) DBG | unable to find current IP address of domain addons-479471 in network mk-addons-479471
	I0819 10:45:24.185577  107271 main.go:141] libmachine: (addons-479471) DBG | I0819 10:45:24.185496  107293 retry.go:31] will retry after 368.501634ms: waiting for machine to come up
	I0819 10:45:24.556123  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:24.556665  107271 main.go:141] libmachine: (addons-479471) DBG | unable to find current IP address of domain addons-479471 in network mk-addons-479471
	I0819 10:45:24.556689  107271 main.go:141] libmachine: (addons-479471) DBG | I0819 10:45:24.556623  107293 retry.go:31] will retry after 573.645742ms: waiting for machine to come up
	I0819 10:45:25.131508  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:25.131947  107271 main.go:141] libmachine: (addons-479471) DBG | unable to find current IP address of domain addons-479471 in network mk-addons-479471
	I0819 10:45:25.131977  107271 main.go:141] libmachine: (addons-479471) DBG | I0819 10:45:25.131897  107293 retry.go:31] will retry after 609.138669ms: waiting for machine to come up
	I0819 10:45:25.742769  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:25.743134  107271 main.go:141] libmachine: (addons-479471) DBG | unable to find current IP address of domain addons-479471 in network mk-addons-479471
	I0819 10:45:25.743164  107271 main.go:141] libmachine: (addons-479471) DBG | I0819 10:45:25.743066  107293 retry.go:31] will retry after 815.009545ms: waiting for machine to come up
	I0819 10:45:26.560183  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:26.560528  107271 main.go:141] libmachine: (addons-479471) DBG | unable to find current IP address of domain addons-479471 in network mk-addons-479471
	I0819 10:45:26.560553  107271 main.go:141] libmachine: (addons-479471) DBG | I0819 10:45:26.560489  107293 retry.go:31] will retry after 946.790721ms: waiting for machine to come up
	I0819 10:45:27.508556  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:27.508906  107271 main.go:141] libmachine: (addons-479471) DBG | unable to find current IP address of domain addons-479471 in network mk-addons-479471
	I0819 10:45:27.508931  107271 main.go:141] libmachine: (addons-479471) DBG | I0819 10:45:27.508868  107293 retry.go:31] will retry after 1.357958756s: waiting for machine to come up
	I0819 10:45:28.868161  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:28.868609  107271 main.go:141] libmachine: (addons-479471) DBG | unable to find current IP address of domain addons-479471 in network mk-addons-479471
	I0819 10:45:28.868640  107271 main.go:141] libmachine: (addons-479471) DBG | I0819 10:45:28.868570  107293 retry.go:31] will retry after 1.394831065s: waiting for machine to come up
	I0819 10:45:30.265041  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:30.265425  107271 main.go:141] libmachine: (addons-479471) DBG | unable to find current IP address of domain addons-479471 in network mk-addons-479471
	I0819 10:45:30.265453  107271 main.go:141] libmachine: (addons-479471) DBG | I0819 10:45:30.265371  107293 retry.go:31] will retry after 2.055582314s: waiting for machine to come up
	I0819 10:45:32.322327  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:32.322701  107271 main.go:141] libmachine: (addons-479471) DBG | unable to find current IP address of domain addons-479471 in network mk-addons-479471
	I0819 10:45:32.322731  107271 main.go:141] libmachine: (addons-479471) DBG | I0819 10:45:32.322638  107293 retry.go:31] will retry after 2.351664393s: waiting for machine to come up
	I0819 10:45:34.677251  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:34.677869  107271 main.go:141] libmachine: (addons-479471) DBG | unable to find current IP address of domain addons-479471 in network mk-addons-479471
	I0819 10:45:34.677909  107271 main.go:141] libmachine: (addons-479471) DBG | I0819 10:45:34.677813  107293 retry.go:31] will retry after 2.48538757s: waiting for machine to come up
	I0819 10:45:37.164795  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:37.165227  107271 main.go:141] libmachine: (addons-479471) DBG | unable to find current IP address of domain addons-479471 in network mk-addons-479471
	I0819 10:45:37.165260  107271 main.go:141] libmachine: (addons-479471) DBG | I0819 10:45:37.165158  107293 retry.go:31] will retry after 4.298060424s: waiting for machine to come up
	I0819 10:45:41.468799  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:41.469394  107271 main.go:141] libmachine: (addons-479471) Found IP for machine: 192.168.39.182
	I0819 10:45:41.469423  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has current primary IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:41.469432  107271 main.go:141] libmachine: (addons-479471) Reserving static IP address...
	I0819 10:45:41.469877  107271 main.go:141] libmachine: (addons-479471) DBG | unable to find host DHCP lease matching {name: "addons-479471", mac: "52:54:00:e4:68:92", ip: "192.168.39.182"} in network mk-addons-479471
	I0819 10:45:41.544601  107271 main.go:141] libmachine: (addons-479471) Reserved static IP address: 192.168.39.182
	I0819 10:45:41.544626  107271 main.go:141] libmachine: (addons-479471) Waiting for SSH to be available...
	I0819 10:45:41.544636  107271 main.go:141] libmachine: (addons-479471) DBG | Getting to WaitForSSH function...
	I0819 10:45:41.547371  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:41.547863  107271 main.go:141] libmachine: (addons-479471) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471
	I0819 10:45:41.547893  107271 main.go:141] libmachine: (addons-479471) DBG | unable to find defined IP address of network mk-addons-479471 interface with MAC address 52:54:00:e4:68:92
	I0819 10:45:41.548130  107271 main.go:141] libmachine: (addons-479471) DBG | Using SSH client type: external
	I0819 10:45:41.548159  107271 main.go:141] libmachine: (addons-479471) DBG | Using SSH private key: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa (-rw-------)
	I0819 10:45:41.548206  107271 main.go:141] libmachine: (addons-479471) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 10:45:41.548228  107271 main.go:141] libmachine: (addons-479471) DBG | About to run SSH command:
	I0819 10:45:41.548264  107271 main.go:141] libmachine: (addons-479471) DBG | exit 0
	I0819 10:45:41.560118  107271 main.go:141] libmachine: (addons-479471) DBG | SSH cmd err, output: exit status 255: 
	I0819 10:45:41.560143  107271 main.go:141] libmachine: (addons-479471) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0819 10:45:41.560150  107271 main.go:141] libmachine: (addons-479471) DBG | command : exit 0
	I0819 10:45:41.560155  107271 main.go:141] libmachine: (addons-479471) DBG | err     : exit status 255
	I0819 10:45:41.560164  107271 main.go:141] libmachine: (addons-479471) DBG | output  : 
	I0819 10:45:44.562317  107271 main.go:141] libmachine: (addons-479471) DBG | Getting to WaitForSSH function...
	I0819 10:45:44.565294  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:44.565642  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:45:44.565674  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:44.565812  107271 main.go:141] libmachine: (addons-479471) DBG | Using SSH client type: external
	I0819 10:45:44.565839  107271 main.go:141] libmachine: (addons-479471) DBG | Using SSH private key: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa (-rw-------)
	I0819 10:45:44.565885  107271 main.go:141] libmachine: (addons-479471) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 10:45:44.565903  107271 main.go:141] libmachine: (addons-479471) DBG | About to run SSH command:
	I0819 10:45:44.565917  107271 main.go:141] libmachine: (addons-479471) DBG | exit 0
	I0819 10:45:44.691847  107271 main.go:141] libmachine: (addons-479471) DBG | SSH cmd err, output: <nil>: 
	I0819 10:45:44.692103  107271 main.go:141] libmachine: (addons-479471) KVM machine creation complete!
	I0819 10:45:44.692409  107271 main.go:141] libmachine: (addons-479471) Calling .GetConfigRaw
	I0819 10:45:44.693018  107271 main.go:141] libmachine: (addons-479471) Calling .DriverName
	I0819 10:45:44.693249  107271 main.go:141] libmachine: (addons-479471) Calling .DriverName
	I0819 10:45:44.693448  107271 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:45:44.693465  107271 main.go:141] libmachine: (addons-479471) Calling .GetState
	I0819 10:45:44.694773  107271 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:45:44.694791  107271 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:45:44.694805  107271 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:45:44.694814  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:45:44.696970  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:44.697866  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:45:44.697919  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:44.698008  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:45:44.698666  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:45:44.698954  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:45:44.699137  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:45:44.699296  107271 main.go:141] libmachine: Using SSH client type: native
	I0819 10:45:44.699494  107271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0819 10:45:44.699507  107271 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:45:44.803213  107271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:45:44.803249  107271 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:45:44.803258  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:45:44.806082  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:44.806664  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:45:44.806697  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:44.806908  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:45:44.807135  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:45:44.807292  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:45:44.807423  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:45:44.807594  107271 main.go:141] libmachine: Using SSH client type: native
	I0819 10:45:44.807805  107271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0819 10:45:44.807817  107271 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:45:44.912164  107271 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:45:44.912252  107271 main.go:141] libmachine: found compatible host: buildroot
	I0819 10:45:44.912264  107271 main.go:141] libmachine: Provisioning with buildroot...
	I0819 10:45:44.912271  107271 main.go:141] libmachine: (addons-479471) Calling .GetMachineName
	I0819 10:45:44.912538  107271 buildroot.go:166] provisioning hostname "addons-479471"
	I0819 10:45:44.912565  107271 main.go:141] libmachine: (addons-479471) Calling .GetMachineName
	I0819 10:45:44.912766  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:45:44.915061  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:44.915430  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:45:44.915461  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:44.915645  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:45:44.915864  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:45:44.916032  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:45:44.916162  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:45:44.916304  107271 main.go:141] libmachine: Using SSH client type: native
	I0819 10:45:44.916540  107271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0819 10:45:44.916560  107271 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-479471 && echo "addons-479471" | sudo tee /etc/hostname
	I0819 10:45:45.035050  107271 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-479471
	
	I0819 10:45:45.035096  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:45:45.037848  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:45.038164  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:45:45.038197  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:45.038358  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:45:45.038577  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:45:45.038733  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:45:45.038937  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:45:45.039093  107271 main.go:141] libmachine: Using SSH client type: native
	I0819 10:45:45.039266  107271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0819 10:45:45.039282  107271 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-479471' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-479471/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-479471' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:45:45.152220  107271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:45:45.152256  107271 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19476-99410/.minikube CaCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19476-99410/.minikube}
	I0819 10:45:45.152282  107271 buildroot.go:174] setting up certificates
	I0819 10:45:45.152297  107271 provision.go:84] configureAuth start
	I0819 10:45:45.152314  107271 main.go:141] libmachine: (addons-479471) Calling .GetMachineName
	I0819 10:45:45.152629  107271 main.go:141] libmachine: (addons-479471) Calling .GetIP
	I0819 10:45:45.155179  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:45.155504  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:45:45.155538  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:45.155666  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:45:45.157819  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:45.158217  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:45:45.158248  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:45.158438  107271 provision.go:143] copyHostCerts
	I0819 10:45:45.158532  107271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem (1082 bytes)
	I0819 10:45:45.158699  107271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem (1123 bytes)
	I0819 10:45:45.158804  107271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem (1679 bytes)
	I0819 10:45:45.158892  107271 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem org=jenkins.addons-479471 san=[127.0.0.1 192.168.39.182 addons-479471 localhost minikube]
	I0819 10:45:45.264567  107271 provision.go:177] copyRemoteCerts
	I0819 10:45:45.264632  107271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:45:45.264658  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:45:45.267534  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:45.267879  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:45:45.267909  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:45.268136  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:45:45.268342  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:45:45.268505  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:45:45.268672  107271 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa Username:docker}
	I0819 10:45:45.349647  107271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:45:45.373029  107271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:45:45.396672  107271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 10:45:45.419969  107271 provision.go:87] duration metric: took 267.655054ms to configureAuth
	I0819 10:45:45.420003  107271 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:45:45.420204  107271 config.go:182] Loaded profile config "addons-479471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 10:45:45.420295  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:45:45.422877  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:45.423187  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:45:45.423208  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:45.423389  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:45:45.423612  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:45:45.423821  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:45:45.423987  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:45:45.424154  107271 main.go:141] libmachine: Using SSH client type: native
	I0819 10:45:45.424337  107271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0819 10:45:45.424351  107271 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 10:45:45.683292  107271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 10:45:45.683328  107271 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:45:45.683340  107271 main.go:141] libmachine: (addons-479471) Calling .GetURL
	I0819 10:45:45.684718  107271 main.go:141] libmachine: (addons-479471) DBG | Using libvirt version 6000000
	I0819 10:45:45.686670  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:45.686948  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:45:45.686977  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:45.687117  107271 main.go:141] libmachine: Docker is up and running!
	I0819 10:45:45.687135  107271 main.go:141] libmachine: Reticulating splines...
	I0819 10:45:45.687144  107271 client.go:171] duration metric: took 24.209845734s to LocalClient.Create
	I0819 10:45:45.687168  107271 start.go:167] duration metric: took 24.209911079s to libmachine.API.Create "addons-479471"
	I0819 10:45:45.687179  107271 start.go:293] postStartSetup for "addons-479471" (driver="kvm2")
	I0819 10:45:45.687190  107271 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:45:45.687211  107271 main.go:141] libmachine: (addons-479471) Calling .DriverName
	I0819 10:45:45.687504  107271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:45:45.687531  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:45:45.690058  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:45.690411  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:45:45.690443  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:45.690584  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:45:45.690797  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:45:45.691005  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:45:45.691138  107271 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa Username:docker}
	I0819 10:45:45.774406  107271 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:45:45.778679  107271 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:45:45.778724  107271 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/addons for local assets ...
	I0819 10:45:45.778824  107271 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/files for local assets ...
	I0819 10:45:45.778851  107271 start.go:296] duration metric: took 91.66741ms for postStartSetup
	I0819 10:45:45.778889  107271 main.go:141] libmachine: (addons-479471) Calling .GetConfigRaw
	I0819 10:45:45.779543  107271 main.go:141] libmachine: (addons-479471) Calling .GetIP
	I0819 10:45:45.782074  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:45.782428  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:45:45.782466  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:45.782714  107271 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/config.json ...
	I0819 10:45:45.782925  107271 start.go:128] duration metric: took 24.325074593s to createHost
	I0819 10:45:45.782950  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:45:45.785299  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:45.785620  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:45:45.785652  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:45.785875  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:45:45.786089  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:45:45.786262  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:45:45.786397  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:45:45.786556  107271 main.go:141] libmachine: Using SSH client type: native
	I0819 10:45:45.786722  107271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0819 10:45:45.786734  107271 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:45:45.892451  107271 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724064345.867464314
	
	I0819 10:45:45.892479  107271 fix.go:216] guest clock: 1724064345.867464314
	I0819 10:45:45.892487  107271 fix.go:229] Guest: 2024-08-19 10:45:45.867464314 +0000 UTC Remote: 2024-08-19 10:45:45.782939922 +0000 UTC m=+24.436986476 (delta=84.524392ms)
	I0819 10:45:45.892516  107271 fix.go:200] guest clock delta is within tolerance: 84.524392ms
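The clock-skew check above compares a timestamp read over SSH from the guest against the host's clock. A minimal sketch of the same comparison done by hand, assuming the VM IP and SSH key recorded earlier in this log (bc is only used for the floating-point subtraction):

	host_now=$(date +%s.%N)
	guest_now=$(ssh -i /home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa docker@192.168.39.182 'date +%s.%N')
	# positive result: host ahead of guest; negative: guest ahead of host
	echo "guest/host clock delta: $(echo "$host_now - $guest_now" | bc)s"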
	I0819 10:45:45.892522  107271 start.go:83] releasing machines lock for "addons-479471", held for 24.434764341s
	I0819 10:45:45.892541  107271 main.go:141] libmachine: (addons-479471) Calling .DriverName
	I0819 10:45:45.892879  107271 main.go:141] libmachine: (addons-479471) Calling .GetIP
	I0819 10:45:45.895685  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:45.896119  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:45:45.896142  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:45.896324  107271 main.go:141] libmachine: (addons-479471) Calling .DriverName
	I0819 10:45:45.896854  107271 main.go:141] libmachine: (addons-479471) Calling .DriverName
	I0819 10:45:45.897031  107271 main.go:141] libmachine: (addons-479471) Calling .DriverName
	I0819 10:45:45.897106  107271 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:45:45.897170  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:45:45.897314  107271 ssh_runner.go:195] Run: cat /version.json
	I0819 10:45:45.897339  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:45:45.899816  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:45.900060  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:45.900188  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:45:45.900215  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:45.900327  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:45:45.900457  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:45:45.900481  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:45.900534  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:45:45.900635  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:45:45.900705  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:45:45.900777  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:45:45.900837  107271 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa Username:docker}
	I0819 10:45:45.900870  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:45:45.901013  107271 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa Username:docker}
	I0819 10:45:45.976536  107271 ssh_runner.go:195] Run: systemctl --version
	I0819 10:45:45.999888  107271 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 10:45:46.154797  107271 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 10:45:46.160855  107271 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:45:46.160938  107271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:45:46.176195  107271 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:45:46.176226  107271 start.go:495] detecting cgroup driver to use...
	I0819 10:45:46.176326  107271 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:45:46.192350  107271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:45:46.206752  107271 docker.go:217] disabling cri-docker service (if available) ...
	I0819 10:45:46.206811  107271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 10:45:46.221022  107271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 10:45:46.235307  107271 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 10:45:46.352296  107271 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 10:45:46.516282  107271 docker.go:233] disabling docker service ...
	I0819 10:45:46.516372  107271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 10:45:46.530693  107271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 10:45:46.544111  107271 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 10:45:46.673700  107271 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 10:45:46.787412  107271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 10:45:46.809078  107271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:45:46.827102  107271 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 10:45:46.827177  107271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 10:45:46.837668  107271 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 10:45:46.837763  107271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 10:45:46.848698  107271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 10:45:46.859295  107271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 10:45:46.870232  107271 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:45:46.881361  107271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 10:45:46.892156  107271 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 10:45:46.909523  107271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 10:45:46.920475  107271 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:45:46.930265  107271 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 10:45:46.930348  107271 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 10:45:46.943916  107271 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
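The sysctl failure above simply means the br_netfilter module was not loaded yet, which is why minikube follows up with modprobe. A sketch of the same prerequisite checks, run inside the guest:

	sudo modprobe br_netfilter
	# this file only exists once the module is loaded; kubeadm's preflight expects it to read 1
	cat /proc/sys/net/bridge/bridge-nf-call-iptables
	# same forwarding toggle the log applies next
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward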
	I0819 10:45:46.954022  107271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:45:47.081403  107271 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 10:45:47.213406  107271 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 10:45:47.213517  107271 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 10:45:47.218213  107271 start.go:563] Will wait 60s for crictl version
	I0819 10:45:47.218301  107271 ssh_runner.go:195] Run: which crictl
	I0819 10:45:47.222259  107271 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:45:47.258835  107271 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 10:45:47.258962  107271 ssh_runner.go:195] Run: crio --version
	I0819 10:45:47.287269  107271 ssh_runner.go:195] Run: crio --version
	I0819 10:45:47.317176  107271 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 10:45:47.318649  107271 main.go:141] libmachine: (addons-479471) Calling .GetIP
	I0819 10:45:47.321298  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:47.321678  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:45:47.321701  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:45:47.322040  107271 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 10:45:47.326191  107271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:45:47.338261  107271 kubeadm.go:883] updating cluster {Name:addons-479471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-479471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 10:45:47.338399  107271 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 10:45:47.338449  107271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 10:45:47.369148  107271 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 10:45:47.369218  107271 ssh_runner.go:195] Run: which lz4
	I0819 10:45:47.372969  107271 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 10:45:47.377014  107271 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 10:45:47.377058  107271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 10:45:48.579851  107271 crio.go:462] duration metric: took 1.206910522s to copy over tarball
	I0819 10:45:48.579937  107271 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 10:45:50.779961  107271 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.199992515s)
	I0819 10:45:50.780001  107271 crio.go:469] duration metric: took 2.200111307s to extract the tarball
	I0819 10:45:50.780013  107271 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 10:45:50.816156  107271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 10:45:50.854337  107271 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 10:45:50.854370  107271 cache_images.go:84] Images are preloaded, skipping loading
	I0819 10:45:50.854381  107271 kubeadm.go:934] updating node { 192.168.39.182 8443 v1.31.0 crio true true} ...
	I0819 10:45:50.854523  107271 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-479471 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-479471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 10:45:50.854610  107271 ssh_runner.go:195] Run: crio config
	I0819 10:45:50.902453  107271 cni.go:84] Creating CNI manager for ""
	I0819 10:45:50.902475  107271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 10:45:50.902486  107271 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 10:45:50.902509  107271 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.182 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-479471 NodeName:addons-479471 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 10:45:50.902657  107271 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-479471"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
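The config printed above is still written against the deprecated kubeadm.k8s.io/v1beta3 API, which is what triggers the migration warnings further down in this log. Once the file has been copied to /var/tmp/minikube/kubeadm.yaml later in the run, it could be migrated with the command kubeadm itself suggests; a sketch, with /tmp/kubeadm-new.yaml as a hypothetical output path:

	sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
	  kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-new.yaml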
	I0819 10:45:50.902723  107271 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:45:50.912359  107271 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 10:45:50.912434  107271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 10:45:50.921650  107271 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0819 10:45:50.939625  107271 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:45:50.957247  107271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0819 10:45:50.975402  107271 ssh_runner.go:195] Run: grep 192.168.39.182	control-plane.minikube.internal$ /etc/hosts
	I0819 10:45:50.979263  107271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:45:50.991256  107271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:45:51.128472  107271 ssh_runner.go:195] Run: sudo systemctl start kubelet
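With the unit file and the 10-kubeadm.conf drop-in copied over and kubelet started, standard systemd tooling on the guest can confirm the override took effect; a sketch, not part of the test itself:

	systemctl cat kubelet                   # prints the unit plus /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	systemctl show kubelet -p ExecStart     # effective ExecStart after the drop-in is applied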
	I0819 10:45:51.145238  107271 certs.go:68] Setting up /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471 for IP: 192.168.39.182
	I0819 10:45:51.145271  107271 certs.go:194] generating shared ca certs ...
	I0819 10:45:51.145295  107271 certs.go:226] acquiring lock for ca certs: {Name:mkea0a5571fe2f86ea35780a8a0585cdcb5f186c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:45:51.145476  107271 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key
	I0819 10:45:51.462938  107271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt ...
	I0819 10:45:51.462979  107271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt: {Name:mk79a0ea2fc61e3037847b3050468535c5e713a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:45:51.463149  107271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key ...
	I0819 10:45:51.463160  107271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key: {Name:mk290032c17430310175418e8f7b55f955cc0259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:45:51.463231  107271 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key
	I0819 10:45:51.667049  107271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt ...
	I0819 10:45:51.667082  107271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt: {Name:mk230122031844d57171d331238a4e5c6e5358bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:45:51.667263  107271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key ...
	I0819 10:45:51.667277  107271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key: {Name:mk672ab1c25465e2379aa249d171cc9421b08c74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:45:51.667370  107271 certs.go:256] generating profile certs ...
	I0819 10:45:51.667426  107271 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/client.key
	I0819 10:45:51.667452  107271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/client.crt with IP's: []
	I0819 10:45:51.737080  107271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/client.crt ...
	I0819 10:45:51.737113  107271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/client.crt: {Name:mk5583626d43fbdb96151c1ce39a6e2fdb74d9ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:45:51.737307  107271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/client.key ...
	I0819 10:45:51.737334  107271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/client.key: {Name:mka03058565b2b0debc2ba1733a531290265bed0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:45:51.737436  107271 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/apiserver.key.413e8488
	I0819 10:45:51.737456  107271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/apiserver.crt.413e8488 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.182]
	I0819 10:45:52.034860  107271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/apiserver.crt.413e8488 ...
	I0819 10:45:52.034901  107271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/apiserver.crt.413e8488: {Name:mk1a63ff4a7fe439e1a67008d1a855d461b4d8d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:45:52.035110  107271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/apiserver.key.413e8488 ...
	I0819 10:45:52.035129  107271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/apiserver.key.413e8488: {Name:mk07a88c982ccb4d4cb0449f7ea8173f40f87eeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:45:52.035226  107271 certs.go:381] copying /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/apiserver.crt.413e8488 -> /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/apiserver.crt
	I0819 10:45:52.035306  107271 certs.go:385] copying /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/apiserver.key.413e8488 -> /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/apiserver.key
	I0819 10:45:52.035354  107271 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/proxy-client.key
	I0819 10:45:52.035371  107271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/proxy-client.crt with IP's: []
	I0819 10:45:52.180649  107271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/proxy-client.crt ...
	I0819 10:45:52.180682  107271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/proxy-client.crt: {Name:mkf4a1b0242f2d5467fe7ae04bfebed9c9a17f1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:45:52.180871  107271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/proxy-client.key ...
	I0819 10:45:52.180885  107271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/proxy-client.key: {Name:mkc2d248f2b85667c2167a99f8c2d118083bfc48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:45:52.181082  107271 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 10:45:52.181118  107271 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:45:52.181145  107271 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:45:52.181169  107271 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem (1679 bytes)
	I0819 10:45:52.181913  107271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:45:52.209189  107271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:45:52.233657  107271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:45:52.258360  107271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 10:45:52.281996  107271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 10:45:52.305901  107271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 10:45:52.330066  107271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:45:52.353809  107271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/addons-479471/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 10:45:52.377322  107271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:45:52.400385  107271 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 10:45:52.417058  107271 ssh_runner.go:195] Run: openssl version
	I0819 10:45:52.423120  107271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:45:52.434934  107271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:45:52.440038  107271 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:45:52.440101  107271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:45:52.445827  107271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
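The b5213941.0 name above is the OpenSSL subject hash of the minikube CA with a .0 suffix, the standard layout OpenSSL uses to look up trusted certs in /etc/ssl/certs. A sketch of how the same symlink name is derived on the guest:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 for this CA
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"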
	I0819 10:45:52.456845  107271 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:45:52.461158  107271 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 10:45:52.461226  107271 kubeadm.go:392] StartCluster: {Name:addons-479471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-479471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:45:52.461324  107271 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 10:45:52.461385  107271 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 10:45:52.497834  107271 cri.go:89] found id: ""
	I0819 10:45:52.497928  107271 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 10:45:52.510398  107271 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 10:45:52.526330  107271 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 10:45:52.540807  107271 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 10:45:52.540829  107271 kubeadm.go:157] found existing configuration files:
	
	I0819 10:45:52.540888  107271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 10:45:52.551706  107271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 10:45:52.551786  107271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 10:45:52.562816  107271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 10:45:52.572373  107271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 10:45:52.572430  107271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 10:45:52.582334  107271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 10:45:52.591541  107271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 10:45:52.591622  107271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 10:45:52.601570  107271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 10:45:52.612237  107271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 10:45:52.612356  107271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 10:45:52.622499  107271 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 10:45:52.667622  107271 kubeadm.go:310] W0819 10:45:52.650323     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 10:45:52.668190  107271 kubeadm.go:310] W0819 10:45:52.650978     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 10:45:52.756856  107271 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 10:46:02.298445  107271 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 10:46:02.298526  107271 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 10:46:02.298660  107271 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 10:46:02.298754  107271 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 10:46:02.298876  107271 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 10:46:02.298970  107271 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 10:46:02.300580  107271 out.go:235]   - Generating certificates and keys ...
	I0819 10:46:02.300658  107271 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 10:46:02.300718  107271 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 10:46:02.300794  107271 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 10:46:02.300866  107271 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 10:46:02.300939  107271 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 10:46:02.301003  107271 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 10:46:02.301077  107271 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 10:46:02.301240  107271 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-479471 localhost] and IPs [192.168.39.182 127.0.0.1 ::1]
	I0819 10:46:02.301302  107271 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 10:46:02.301453  107271 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-479471 localhost] and IPs [192.168.39.182 127.0.0.1 ::1]
	I0819 10:46:02.301551  107271 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 10:46:02.301654  107271 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 10:46:02.301719  107271 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 10:46:02.301795  107271 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 10:46:02.301873  107271 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 10:46:02.301941  107271 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 10:46:02.302011  107271 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 10:46:02.302107  107271 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 10:46:02.302182  107271 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 10:46:02.302297  107271 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 10:46:02.302394  107271 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 10:46:02.303975  107271 out.go:235]   - Booting up control plane ...
	I0819 10:46:02.304074  107271 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 10:46:02.304156  107271 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 10:46:02.304234  107271 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 10:46:02.304346  107271 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 10:46:02.304472  107271 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 10:46:02.304532  107271 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 10:46:02.304714  107271 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 10:46:02.304814  107271 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 10:46:02.304897  107271 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00324429s
	I0819 10:46:02.305011  107271 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 10:46:02.305101  107271 kubeadm.go:310] [api-check] The API server is healthy after 4.501597591s
	I0819 10:46:02.305228  107271 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 10:46:02.305398  107271 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 10:46:02.305470  107271 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 10:46:02.305669  107271 kubeadm.go:310] [mark-control-plane] Marking the node addons-479471 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 10:46:02.305752  107271 kubeadm.go:310] [bootstrap-token] Using token: 6s0zpz.cx6hutmt7cdqwzx3
	I0819 10:46:02.307319  107271 out.go:235]   - Configuring RBAC rules ...
	I0819 10:46:02.307409  107271 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 10:46:02.307530  107271 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 10:46:02.307762  107271 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 10:46:02.307907  107271 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 10:46:02.308059  107271 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 10:46:02.308155  107271 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 10:46:02.308284  107271 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 10:46:02.308325  107271 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 10:46:02.308396  107271 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 10:46:02.308415  107271 kubeadm.go:310] 
	I0819 10:46:02.308475  107271 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 10:46:02.308482  107271 kubeadm.go:310] 
	I0819 10:46:02.308557  107271 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 10:46:02.308564  107271 kubeadm.go:310] 
	I0819 10:46:02.308599  107271 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 10:46:02.308656  107271 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 10:46:02.308701  107271 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 10:46:02.308707  107271 kubeadm.go:310] 
	I0819 10:46:02.308785  107271 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 10:46:02.308796  107271 kubeadm.go:310] 
	I0819 10:46:02.308868  107271 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 10:46:02.308877  107271 kubeadm.go:310] 
	I0819 10:46:02.308921  107271 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 10:46:02.309026  107271 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 10:46:02.309116  107271 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 10:46:02.309130  107271 kubeadm.go:310] 
	I0819 10:46:02.309257  107271 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 10:46:02.309351  107271 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 10:46:02.309358  107271 kubeadm.go:310] 
	I0819 10:46:02.309462  107271 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6s0zpz.cx6hutmt7cdqwzx3 \
	I0819 10:46:02.309638  107271 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9a531b4470c09d8f50eed8894aa9b341ca16ed7810e6d53dbdc0d1c4cff0af52 \
	I0819 10:46:02.309676  107271 kubeadm.go:310] 	--control-plane 
	I0819 10:46:02.309684  107271 kubeadm.go:310] 
	I0819 10:46:02.309753  107271 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 10:46:02.309759  107271 kubeadm.go:310] 
	I0819 10:46:02.309871  107271 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6s0zpz.cx6hutmt7cdqwzx3 \
	I0819 10:46:02.310041  107271 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9a531b4470c09d8f50eed8894aa9b341ca16ed7810e6d53dbdc0d1c4cff0af52 
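The --discovery-token-ca-cert-hash printed in the join command above can be recomputed from the cluster CA's public key. A sketch using the standard recipe from the kubeadm documentation, with the cert path taken from this log:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'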
	I0819 10:46:02.310060  107271 cni.go:84] Creating CNI manager for ""
	I0819 10:46:02.310072  107271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 10:46:02.311889  107271 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 10:46:02.313489  107271 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 10:46:02.326015  107271 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 10:46:02.345797  107271 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 10:46:02.345885  107271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:46:02.345898  107271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-479471 minikube.k8s.io/updated_at=2024_08_19T10_46_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934 minikube.k8s.io/name=addons-479471 minikube.k8s.io/primary=true
	I0819 10:46:02.372583  107271 ops.go:34] apiserver oom_adj: -16
	I0819 10:46:02.482247  107271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:46:02.982982  107271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:46:03.483206  107271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:46:03.982373  107271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:46:04.482650  107271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:46:04.982602  107271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:46:05.482424  107271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:46:05.983033  107271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:46:06.483165  107271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:46:06.570923  107271 kubeadm.go:1113] duration metric: took 4.225107931s to wait for elevateKubeSystemPrivileges
	I0819 10:46:06.570970  107271 kubeadm.go:394] duration metric: took 14.109752236s to StartCluster
	I0819 10:46:06.570999  107271 settings.go:142] acquiring lock: {Name:mk5d5753fc545a0b5fdfa44a1e5cbc5d198d9dfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:46:06.571165  107271 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 10:46:06.571548  107271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/kubeconfig: {Name:mk73914d2bd0db664ade6c952753a7dd30404784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:46:06.571773  107271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 10:46:06.571811  107271 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 10:46:06.571883  107271 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0819 10:46:06.571989  107271 addons.go:69] Setting yakd=true in profile "addons-479471"
	I0819 10:46:06.571995  107271 addons.go:69] Setting cloud-spanner=true in profile "addons-479471"
	I0819 10:46:06.572013  107271 addons.go:69] Setting inspektor-gadget=true in profile "addons-479471"
	I0819 10:46:06.572036  107271 addons.go:234] Setting addon cloud-spanner=true in "addons-479471"
	I0819 10:46:06.572039  107271 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-479471"
	I0819 10:46:06.572055  107271 addons.go:234] Setting addon inspektor-gadget=true in "addons-479471"
	I0819 10:46:06.572065  107271 addons.go:69] Setting registry=true in profile "addons-479471"
	I0819 10:46:06.572076  107271 config.go:182] Loaded profile config "addons-479471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 10:46:06.572088  107271 addons.go:69] Setting storage-provisioner=true in profile "addons-479471"
	I0819 10:46:06.572098  107271 host.go:66] Checking if "addons-479471" exists ...
	I0819 10:46:06.572104  107271 addons.go:234] Setting addon storage-provisioner=true in "addons-479471"
	I0819 10:46:06.572108  107271 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-479471"
	I0819 10:46:06.572080  107271 addons.go:234] Setting addon registry=true in "addons-479471"
	I0819 10:46:06.572134  107271 host.go:66] Checking if "addons-479471" exists ...
	I0819 10:46:06.572144  107271 host.go:66] Checking if "addons-479471" exists ...
	I0819 10:46:06.572145  107271 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-479471"
	I0819 10:46:06.572196  107271 addons.go:69] Setting helm-tiller=true in profile "addons-479471"
	I0819 10:46:06.572228  107271 addons.go:234] Setting addon helm-tiller=true in "addons-479471"
	I0819 10:46:06.572256  107271 host.go:66] Checking if "addons-479471" exists ...
	I0819 10:46:06.572334  107271 addons.go:69] Setting ingress-dns=true in profile "addons-479471"
	I0819 10:46:06.572370  107271 addons.go:234] Setting addon ingress-dns=true in "addons-479471"
	I0819 10:46:06.572404  107271 host.go:66] Checking if "addons-479471" exists ...
	I0819 10:46:06.572026  107271 addons.go:234] Setting addon yakd=true in "addons-479471"
	I0819 10:46:06.572541  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.572555  107271 host.go:66] Checking if "addons-479471" exists ...
	I0819 10:46:06.572575  107271 addons.go:69] Setting volumesnapshots=true in profile "addons-479471"
	I0819 10:46:06.572580  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.572604  107271 addons.go:234] Setting addon volumesnapshots=true in "addons-479471"
	I0819 10:46:06.572604  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.572610  107271 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-479471"
	I0819 10:46:06.572628  107271 host.go:66] Checking if "addons-479471" exists ...
	I0819 10:46:06.572629  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.572639  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.572669  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.572682  107271 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-479471"
	I0819 10:46:06.572695  107271 addons.go:69] Setting ingress=true in profile "addons-479471"
	I0819 10:46:06.572710  107271 addons.go:234] Setting addon ingress=true in "addons-479471"
	I0819 10:46:06.572712  107271 addons.go:69] Setting gcp-auth=true in profile "addons-479471"
	I0819 10:46:06.572729  107271 mustload.go:65] Loading cluster: addons-479471
	I0819 10:46:06.572082  107271 host.go:66] Checking if "addons-479471" exists ...
	I0819 10:46:06.572744  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.572772  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.572929  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.572939  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.572946  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.572965  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.572965  107271 config.go:182] Loaded profile config "addons-479471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 10:46:06.572602  107271 addons.go:69] Setting default-storageclass=true in profile "addons-479471"
	I0819 10:46:06.572057  107271 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-479471"
	I0819 10:46:06.572031  107271 addons.go:69] Setting metrics-server=true in profile "addons-479471"
	I0819 10:46:06.573035  107271 addons.go:234] Setting addon metrics-server=true in "addons-479471"
	I0819 10:46:06.573007  107271 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-479471"
	I0819 10:46:06.572098  107271 addons.go:69] Setting volcano=true in profile "addons-479471"
	I0819 10:46:06.572555  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.573087  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.573060  107271 addons.go:234] Setting addon volcano=true in "addons-479471"
	I0819 10:46:06.573154  107271 host.go:66] Checking if "addons-479471" exists ...
	I0819 10:46:06.573270  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.573302  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.573369  107271 host.go:66] Checking if "addons-479471" exists ...
	I0819 10:46:06.573518  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.573549  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.573572  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.573602  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.573675  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.573701  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.573747  107271 host.go:66] Checking if "addons-479471" exists ...
	I0819 10:46:06.573923  107271 host.go:66] Checking if "addons-479471" exists ...
	I0819 10:46:06.574028  107271 host.go:66] Checking if "addons-479471" exists ...
	I0819 10:46:06.574054  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.574075  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.574311  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.574391  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.574443  107271 out.go:177] * Verifying Kubernetes components...
	I0819 10:46:06.576108  107271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:46:06.594007  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34343
	I0819 10:46:06.594021  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34969
	I0819 10:46:06.594047  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36675
	I0819 10:46:06.594024  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35227
	I0819 10:46:06.594727  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44859
	I0819 10:46:06.594741  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.594728  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.594729  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.594983  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.595243  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.595311  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.595318  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.595321  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.595334  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.595544  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.595561  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.595677  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.595682  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.595759  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.596095  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.596109  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.596129  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.596178  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.596310  107271 main.go:141] libmachine: (addons-479471) Calling .GetState
	I0819 10:46:06.596335  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.596489  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.596521  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.596623  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.596646  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.596731  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.597413  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.597464  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.598153  107271 host.go:66] Checking if "addons-479471" exists ...
	I0819 10:46:06.604203  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.604262  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.604307  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.604334  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.604543  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.604599  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.605110  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.605157  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.605822  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.605861  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.620220  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38855
	I0819 10:46:06.620920  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.621605  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.621628  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.622055  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.622290  107271 main.go:141] libmachine: (addons-479471) Calling .GetState
	I0819 10:46:06.627036  107271 addons.go:234] Setting addon default-storageclass=true in "addons-479471"
	I0819 10:46:06.627090  107271 host.go:66] Checking if "addons-479471" exists ...
	I0819 10:46:06.627495  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.627535  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.628790  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46419
	I0819 10:46:06.629474  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.630191  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.630211  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.632843  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33785
	I0819 10:46:06.633491  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.633919  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41313
	I0819 10:46:06.634247  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.634308  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.634399  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.634787  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.635352  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.635405  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.635422  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.635826  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.635894  107271 main.go:141] libmachine: (addons-479471) Calling .GetState
	I0819 10:46:06.636476  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.636516  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.636747  107271 main.go:141] libmachine: (addons-479471) Calling .DriverName
	I0819 10:46:06.638304  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46695
	I0819 10:46:06.638337  107271 main.go:141] libmachine: (addons-479471) Calling .DriverName
	I0819 10:46:06.639220  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.639825  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.639847  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.640281  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.640605  107271 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0819 10:46:06.640944  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.640998  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.641097  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45573
	I0819 10:46:06.641623  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.641962  107271 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0819 10:46:06.641980  107271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0819 10:46:06.642000  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:46:06.642161  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.642186  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.642559  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.643143  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.643183  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.645397  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.645802  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:46:06.645828  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.645860  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40277
	I0819 10:46:06.646121  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:46:06.646329  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:46:06.646510  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:46:06.646695  107271 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa Username:docker}
	I0819 10:46:06.647052  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.647679  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.647698  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.650929  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45195
	I0819 10:46:06.651116  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.651887  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.652280  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.652328  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.652444  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.652474  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.653040  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.653609  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.653640  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.663500  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38563
	I0819 10:46:06.664026  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43479
	I0819 10:46:06.664548  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.664842  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.665472  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.665497  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.665781  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.665800  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.665861  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.666289  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.666522  107271 main.go:141] libmachine: (addons-479471) Calling .GetState
	I0819 10:46:06.666595  107271 main.go:141] libmachine: (addons-479471) Calling .GetState
	I0819 10:46:06.667187  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38643
	I0819 10:46:06.667627  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.668181  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.668200  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.668526  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.669271  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.669312  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.669557  107271 main.go:141] libmachine: (addons-479471) Calling .DriverName
	I0819 10:46:06.670272  107271 main.go:141] libmachine: (addons-479471) Calling .DriverName
	I0819 10:46:06.671790  107271 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0819 10:46:06.672414  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40501
	I0819 10:46:06.672925  107271 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0819 10:46:06.672996  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.673795  107271 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0819 10:46:06.673818  107271 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0819 10:46:06.673845  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:46:06.673891  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.673909  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.674252  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34185
	I0819 10:46:06.674442  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.674811  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.674903  107271 main.go:141] libmachine: (addons-479471) Calling .GetState
	I0819 10:46:06.675438  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.675460  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.675883  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.676036  107271 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0819 10:46:06.676159  107271 main.go:141] libmachine: (addons-479471) Calling .GetState
	I0819 10:46:06.677526  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.677933  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:46:06.677955  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.678255  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:46:06.678487  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:46:06.678707  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:46:06.678894  107271 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa Username:docker}
	I0819 10:46:06.678930  107271 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0819 10:46:06.679709  107271 main.go:141] libmachine: (addons-479471) Calling .DriverName
	I0819 10:46:06.681566  107271 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0819 10:46:06.681591  107271 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 10:46:06.682563  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44151
	I0819 10:46:06.683176  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.683378  107271 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 10:46:06.683394  107271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 10:46:06.683415  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:46:06.683612  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42419
	I0819 10:46:06.683700  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.683714  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.684311  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.684391  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.684614  107271 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0819 10:46:06.684849  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44123
	I0819 10:46:06.684925  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.684939  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.684981  107271 main.go:141] libmachine: (addons-479471) Calling .GetState
	I0819 10:46:06.685025  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33169
	I0819 10:46:06.685052  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34293
	I0819 10:46:06.685668  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.685668  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.686283  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.686298  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.686329  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.686332  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.686413  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.686301  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.686991  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.687010  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.687074  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.687197  107271 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0819 10:46:06.687706  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.687834  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.687951  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.688230  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33007
	I0819 10:46:06.688713  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.688743  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.688940  107271 main.go:141] libmachine: (addons-479471) Calling .DriverName
	I0819 10:46:06.689047  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.689079  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42547
	I0819 10:46:06.689222  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:46:06.689237  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.689365  107271 main.go:141] libmachine: (addons-479471) Calling .DriverName
	I0819 10:46:06.689499  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:46:06.689631  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.690156  107271 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0819 10:46:06.690199  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:46:06.690513  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:46:06.690618  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.690685  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.690698  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.690955  107271 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa Username:docker}
	I0819 10:46:06.691135  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.691149  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.691497  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.691705  107271 main.go:141] libmachine: (addons-479471) Calling .GetState
	I0819 10:46:06.691743  107271 out.go:177]   - Using image docker.io/registry:2.8.3
	I0819 10:46:06.691770  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.691880  107271 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0819 10:46:06.692535  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.692573  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.692963  107271 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0819 10:46:06.693916  107271 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 10:46:06.693927  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.693936  107271 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 10:46:06.693943  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.693960  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:46:06.694392  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.694672  107271 main.go:141] libmachine: (addons-479471) Calling .GetState
	I0819 10:46:06.694837  107271 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0819 10:46:06.694841  107271 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0819 10:46:06.694980  107271 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0819 10:46:06.695002  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:46:06.697863  107271 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-479471"
	I0819 10:46:06.697926  107271 host.go:66] Checking if "addons-479471" exists ...
	I0819 10:46:06.698311  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.698355  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.698600  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.700040  107271 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0819 10:46:06.700064  107271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0819 10:46:06.700088  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:46:06.700739  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45041
	I0819 10:46:06.701635  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.702271  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.702294  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.702357  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.702761  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.702997  107271 main.go:141] libmachine: (addons-479471) Calling .GetState
	I0819 10:46:06.703619  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:46:06.703660  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.703917  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:46:06.704142  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:46:06.704306  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:46:06.704471  107271 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa Username:docker}
	I0819 10:46:06.705168  107271 main.go:141] libmachine: (addons-479471) Calling .DriverName
	I0819 10:46:06.705675  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.705952  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:46:06.705975  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.706253  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:46:06.706456  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:46:06.706532  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:46:06.706547  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.706635  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:46:06.706760  107271 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa Username:docker}
	I0819 10:46:06.707086  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:46:06.707151  107271 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0819 10:46:06.707372  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:46:06.707577  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:46:06.707649  107271 main.go:141] libmachine: (addons-479471) Calling .DriverName
	I0819 10:46:06.707978  107271 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa Username:docker}
	I0819 10:46:06.708830  107271 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 10:46:06.708849  107271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0819 10:46:06.708869  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:46:06.709494  107271 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0819 10:46:06.710939  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37453
	I0819 10:46:06.711255  107271 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0819 10:46:06.711275  107271 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0819 10:46:06.711358  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:46:06.711548  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.712111  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.712128  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.712194  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45075
	I0819 10:46:06.712250  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.712637  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.712834  107271 main.go:141] libmachine: (addons-479471) Calling .GetState
	I0819 10:46:06.713355  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.713916  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.713935  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.714250  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:46:06.714271  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.714702  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.714987  107271 main.go:141] libmachine: (addons-479471) Calling .GetState
	I0819 10:46:06.715684  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.715740  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:46:06.715946  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:46:06.716113  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:46:06.716130  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.716419  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:46:06.716464  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:46:06.716573  107271 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa Username:docker}
	I0819 10:46:06.716899  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:46:06.717116  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:46:06.717332  107271 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa Username:docker}
	I0819 10:46:06.717395  107271 main.go:141] libmachine: (addons-479471) Calling .DriverName
	I0819 10:46:06.718023  107271 main.go:141] libmachine: (addons-479471) Calling .DriverName
	I0819 10:46:06.718240  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:06.718253  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:06.718484  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:06.718497  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:06.718506  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:06.718521  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:06.718700  107271 main.go:141] libmachine: (addons-479471) DBG | Closing plugin on server side
	I0819 10:46:06.718712  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:06.718732  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	W0819 10:46:06.718817  107271 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0819 10:46:06.719569  107271 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0819 10:46:06.721339  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36849
	I0819 10:46:06.721440  107271 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0819 10:46:06.721456  107271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0819 10:46:06.721478  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:46:06.723339  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35023
	I0819 10:46:06.723768  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38135
	I0819 10:46:06.723830  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.724124  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.724206  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.724312  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.724562  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.724578  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.724640  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:46:06.724655  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.724872  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.724893  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.724967  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:46:06.725014  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.725065  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.725086  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.725422  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.725429  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.725468  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:46:06.725630  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:46:06.725636  107271 main.go:141] libmachine: (addons-479471) Calling .GetState
	I0819 10:46:06.725653  107271 main.go:141] libmachine: (addons-479471) Calling .GetState
	I0819 10:46:06.725725  107271 main.go:141] libmachine: (addons-479471) Calling .GetState
	I0819 10:46:06.725829  107271 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa Username:docker}
	I0819 10:46:06.727620  107271 main.go:141] libmachine: (addons-479471) Calling .DriverName
	I0819 10:46:06.728447  107271 main.go:141] libmachine: (addons-479471) Calling .DriverName
	I0819 10:46:06.728450  107271 main.go:141] libmachine: (addons-479471) Calling .DriverName
	I0819 10:46:06.728730  107271 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 10:46:06.728756  107271 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 10:46:06.728777  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:46:06.729831  107271 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0819 10:46:06.730755  107271 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0819 10:46:06.731773  107271 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 10:46:06.731797  107271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0819 10:46:06.731824  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:46:06.732125  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.733026  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:46:06.733082  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.733206  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:46:06.733435  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:46:06.733632  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:46:06.733827  107271 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa Username:docker}
	I0819 10:46:06.733898  107271 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 10:46:06.735135  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.735512  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:46:06.735538  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.735808  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:46:06.735992  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:46:06.736158  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:46:06.736327  107271 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa Username:docker}
	I0819 10:46:06.736493  107271 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	W0819 10:46:06.737145  107271 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60768->192.168.39.182:22: read: connection reset by peer
	I0819 10:46:06.737178  107271 retry.go:31] will retry after 267.699932ms: ssh: handshake failed: read tcp 192.168.39.1:60768->192.168.39.182:22: read: connection reset by peer
	I0819 10:46:06.738162  107271 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 10:46:06.738189  107271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0819 10:46:06.738211  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:46:06.739108  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42725
	I0819 10:46:06.739383  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32839
	I0819 10:46:06.739699  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.739898  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.740530  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.740551  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.740627  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.740650  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.740988  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.740988  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.741196  107271 main.go:141] libmachine: (addons-479471) Calling .GetState
	I0819 10:46:06.741467  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.741552  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:06.741580  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:06.741913  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:46:06.741935  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.742097  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:46:06.742286  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:46:06.742447  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:46:06.742770  107271 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa Username:docker}
	I0819 10:46:06.743255  107271 main.go:141] libmachine: (addons-479471) Calling .DriverName
	I0819 10:46:06.744925  107271 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0819 10:46:06.746759  107271 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0819 10:46:06.746785  107271 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0819 10:46:06.746812  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:46:06.749920  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.750323  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:46:06.750359  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.750517  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:46:06.750730  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:46:06.750974  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:46:06.751125  107271 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa Username:docker}
	W0819 10:46:06.760978  107271 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60778->192.168.39.182:22: read: connection reset by peer
	I0819 10:46:06.761009  107271 retry.go:31] will retry after 335.188038ms: ssh: handshake failed: read tcp 192.168.39.1:60778->192.168.39.182:22: read: connection reset by peer
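The handshake failure above is transient: the guest's sshd is still settling when the first dial goes out, so retry.go simply re-dials after a short randomized backoff. A rough shell equivalent of that loop, assuming the key path and guest address shown in the log, would be:

	# Re-dial SSH a few times while the guest sshd comes up.
	KEY=/home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa
	for attempt in 1 2 3 4 5; do
	  ssh -i "$KEY" -o StrictHostKeyChecking=no -o ConnectTimeout=5 \
	      docker@192.168.39.182 true && break   # handshake succeeded
	  sleep 1   # crude fixed backoff; retry.go uses sub-second randomized delays
	done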
	I0819 10:46:06.775196  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34765
	I0819 10:46:06.775671  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:06.776216  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:06.776245  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:06.776601  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:06.776802  107271 main.go:141] libmachine: (addons-479471) Calling .GetState
	I0819 10:46:06.778574  107271 main.go:141] libmachine: (addons-479471) Calling .DriverName
	I0819 10:46:06.780457  107271 out.go:177]   - Using image docker.io/busybox:stable
	I0819 10:46:06.782393  107271 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0819 10:46:06.784104  107271 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 10:46:06.784126  107271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0819 10:46:06.784149  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:46:06.787793  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.788250  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:46:06.788283  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:06.788528  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:46:06.788757  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:46:06.788944  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:46:06.789105  107271 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa Username:docker}
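The "scp memory --> ..." entries above mean the manifest is rendered from minikube's embedded assets and streamed straight to the guest over the SSH session just opened, rather than copied from a file on the host. Assuming the manifest were a local file, the same push could be approximated with:

	# Stream a manifest to the guest and write it with root privileges (hypothetical local file name).
	cat storage-provisioner-rancher.yaml | \
	  ssh -i /home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa \
	      docker@192.168.39.182 \
	      "sudo tee /etc/kubernetes/addons/storage-provisioner-rancher.yaml >/dev/null"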
	I0819 10:46:06.947288  107271 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0819 10:46:06.947324  107271 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0819 10:46:07.046560  107271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 10:46:07.091227  107271 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0819 10:46:07.091266  107271 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0819 10:46:07.096402  107271 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0819 10:46:07.096430  107271 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0819 10:46:07.127733  107271 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0819 10:46:07.127773  107271 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0819 10:46:07.147760  107271 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0819 10:46:07.147795  107271 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0819 10:46:07.199876  107271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0819 10:46:07.200916  107271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 10:46:07.202296  107271 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 10:46:07.202322  107271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0819 10:46:07.238043  107271 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0819 10:46:07.238078  107271 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0819 10:46:07.240782  107271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 10:46:07.276419  107271 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0819 10:46:07.276454  107271 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0819 10:46:07.281000  107271 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0819 10:46:07.281030  107271 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0819 10:46:07.298363  107271 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0819 10:46:07.298396  107271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0819 10:46:07.331438  107271 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0819 10:46:07.331477  107271 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0819 10:46:07.374807  107271 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 10:46:07.374850  107271 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 10:46:07.377433  107271 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0819 10:46:07.377462  107271 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0819 10:46:07.400225  107271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 10:46:07.459942  107271 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0819 10:46:07.459971  107271 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0819 10:46:07.466998  107271 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 10:46:07.467024  107271 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0819 10:46:07.477026  107271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0819 10:46:07.496574  107271 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0819 10:46:07.496607  107271 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0819 10:46:07.501087  107271 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0819 10:46:07.501111  107271 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0819 10:46:07.545003  107271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 10:46:07.578752  107271 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0819 10:46:07.578782  107271 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0819 10:46:07.603562  107271 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 10:46:07.603587  107271 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 10:46:07.690585  107271 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0819 10:46:07.690624  107271 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0819 10:46:07.693755  107271 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0819 10:46:07.693779  107271 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0819 10:46:07.709986  107271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 10:46:07.713479  107271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 10:46:07.728224  107271 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 10:46:07.728249  107271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0819 10:46:07.738648  107271 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.162510316s)
	I0819 10:46:07.738736  107271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:46:07.738839  107271 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.167046046s)
	I0819 10:46:07.738962  107271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 10:46:07.780215  107271 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0819 10:46:07.780242  107271 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0819 10:46:07.830536  107271 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0819 10:46:07.830562  107271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0819 10:46:07.860437  107271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 10:46:07.894184  107271 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0819 10:46:07.894215  107271 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0819 10:46:07.902566  107271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 10:46:07.984126  107271 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0819 10:46:07.984155  107271 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0819 10:46:08.071564  107271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0819 10:46:08.151856  107271 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0819 10:46:08.151892  107271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0819 10:46:08.231382  107271 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0819 10:46:08.231425  107271 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0819 10:46:08.419282  107271 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0819 10:46:08.419320  107271 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0819 10:46:08.520015  107271 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 10:46:08.520046  107271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0819 10:46:08.620320  107271 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0819 10:46:08.620344  107271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0819 10:46:08.810687  107271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 10:46:08.906725  107271 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0819 10:46:08.906751  107271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0819 10:46:09.235095  107271 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 10:46:09.235132  107271 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0819 10:46:09.526778  107271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 10:46:11.779478  107271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.73287062s)
	I0819 10:46:11.779542  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:11.779559  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:11.779565  107271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.579650611s)
	I0819 10:46:11.779615  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:11.779633  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:11.779633  107271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.578688172s)
	I0819 10:46:11.779676  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:11.779694  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:11.779742  107271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.538915395s)
	I0819 10:46:11.779794  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:11.779808  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:11.780057  107271 main.go:141] libmachine: (addons-479471) DBG | Closing plugin on server side
	I0819 10:46:11.780079  107271 main.go:141] libmachine: (addons-479471) DBG | Closing plugin on server side
	I0819 10:46:11.780119  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:11.780131  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:11.780135  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:11.780142  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:11.780146  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:11.780150  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:11.780158  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:11.780161  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:11.780169  107271 main.go:141] libmachine: (addons-479471) DBG | Closing plugin on server side
	I0819 10:46:11.780122  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:11.780201  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:11.780211  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:11.780212  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:11.780219  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:11.780225  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:11.780239  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:11.780247  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:11.782364  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:11.782366  107271 main.go:141] libmachine: (addons-479471) DBG | Closing plugin on server side
	I0819 10:46:11.782380  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:11.782396  107271 main.go:141] libmachine: (addons-479471) DBG | Closing plugin on server side
	I0819 10:46:11.782403  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:11.782417  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:11.782385  107271 main.go:141] libmachine: (addons-479471) DBG | Closing plugin on server side
	I0819 10:46:11.782422  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:11.782436  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:11.782443  107271 main.go:141] libmachine: (addons-479471) DBG | Closing plugin on server side
	I0819 10:46:11.782462  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:11.782469  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:11.868765  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:11.868786  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:11.869226  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:11.869254  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:12.159704  107271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.759435153s)
	I0819 10:46:12.159809  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:12.159817  107271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.682752005s)
	I0819 10:46:12.159850  107271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.614808882s)
	I0819 10:46:12.159863  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:12.159825  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:12.159881  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:12.159891  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:12.159875  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:12.160178  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:12.160198  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:12.160199  107271 main.go:141] libmachine: (addons-479471) DBG | Closing plugin on server side
	I0819 10:46:12.160207  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:12.160215  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:12.160213  107271 main.go:141] libmachine: (addons-479471) DBG | Closing plugin on server side
	I0819 10:46:12.160223  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:12.160232  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:12.160239  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:12.160243  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:12.160253  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:12.160262  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:12.160270  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:12.160216  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:12.162055  107271 main.go:141] libmachine: (addons-479471) DBG | Closing plugin on server side
	I0819 10:46:12.162056  107271 main.go:141] libmachine: (addons-479471) DBG | Closing plugin on server side
	I0819 10:46:12.162063  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:12.162081  107271 main.go:141] libmachine: (addons-479471) DBG | Closing plugin on server side
	I0819 10:46:12.162086  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:12.162087  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:12.162095  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:12.162106  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:12.162107  107271 addons.go:475] Verifying addon registry=true in "addons-479471"
	I0819 10:46:12.162114  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:12.164169  107271 out.go:177] * Verifying registry addon...
	I0819 10:46:12.166103  107271 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0819 10:46:12.237092  107271 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 10:46:12.237126  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
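kapi.go polls the labelled pods until every one of them reports Ready, which is why the "current state: Pending" line repeats below. Outside the test harness, roughly the same wait can be expressed with kubectl (the 300s timeout here is an illustrative choice, not minikube's):

	# Block until the registry pods in kube-system report the Ready condition.
	kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
	  wait pod -l kubernetes.io/minikube-addons=registry \
	  --for=condition=Ready --timeout=300s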
	I0819 10:46:12.298552  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:12.298584  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:12.298897  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:12.298916  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:12.704369  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:13.172984  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:13.680869  107271 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0819 10:46:13.680925  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:46:13.683572  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:13.684099  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:46:13.684132  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:13.684301  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:46:13.684567  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:46:13.684777  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:46:13.684949  107271 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa Username:docker}
	I0819 10:46:13.707132  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:14.205907  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:14.214961  107271 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0819 10:46:14.424847  107271 addons.go:234] Setting addon gcp-auth=true in "addons-479471"
	I0819 10:46:14.424981  107271 host.go:66] Checking if "addons-479471" exists ...
	I0819 10:46:14.425458  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:14.425502  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:14.440912  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38607
	I0819 10:46:14.441450  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:14.442000  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:14.442030  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:14.442371  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:14.443008  107271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 10:46:14.443044  107271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 10:46:14.458269  107271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46115
	I0819 10:46:14.458803  107271 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:14.459304  107271 main.go:141] libmachine: Using API Version  1
	I0819 10:46:14.459327  107271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:14.459797  107271 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:14.460017  107271 main.go:141] libmachine: (addons-479471) Calling .GetState
	I0819 10:46:14.461684  107271 main.go:141] libmachine: (addons-479471) Calling .DriverName
	I0819 10:46:14.461916  107271 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0819 10:46:14.461938  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHHostname
	I0819 10:46:14.464610  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:14.465006  107271 main.go:141] libmachine: (addons-479471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:68:92", ip: ""} in network mk-addons-479471: {Iface:virbr1 ExpiryTime:2024-08-19 11:45:36 +0000 UTC Type:0 Mac:52:54:00:e4:68:92 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-479471 Clientid:01:52:54:00:e4:68:92}
	I0819 10:46:14.465120  107271 main.go:141] libmachine: (addons-479471) DBG | domain addons-479471 has defined IP address 192.168.39.182 and MAC address 52:54:00:e4:68:92 in network mk-addons-479471
	I0819 10:46:14.465179  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHPort
	I0819 10:46:14.465399  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHKeyPath
	I0819 10:46:14.465546  107271 main.go:141] libmachine: (addons-479471) Calling .GetSSHUsername
	I0819 10:46:14.465712  107271 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/addons-479471/id_rsa Username:docker}
	I0819 10:46:14.672685  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:15.063268  107271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.353238108s)
	I0819 10:46:15.063327  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:15.063343  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:15.063341  107271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.349820976s)
	I0819 10:46:15.063383  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:15.063402  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:15.063416  107271 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.324428076s)
	I0819 10:46:15.063447  107271 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
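The sed pipeline that just completed splices a hosts block for host.minikube.internal (resolving to the host-side bridge address 192.168.39.1) into the CoreDNS Corefile and replaces the ConfigMap. A quick way to confirm the injected record took effect, using the same kubeconfig path as the log, is:

	# Print the live Corefile and look for the injected host record.
	kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
	  get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 2 'hosts {'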
	I0819 10:46:15.063423  107271 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.324673699s)
	I0819 10:46:15.063539  107271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.203048393s)
	I0819 10:46:15.063559  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:15.063577  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:15.063670  107271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.161060454s)
	W0819 10:46:15.063738  107271 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 10:46:15.063770  107271 retry.go:31] will retry after 292.57033ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
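Both apply failures above are the usual CRD-ordering race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same kubectl apply that creates the snapshot.storage.k8s.io CRDs, so the REST mapping for that kind does not exist yet when the custom resource is validated. minikube copes by retrying (and, as the later log lines show, re-applying with --force). A manual two-phase apply that avoids the race would look roughly like this, using the manifest paths from the log:

	# Phase 1: create the snapshot CRDs and wait until the API server reports them Established.
	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=Established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io

	# Phase 2: the VolumeSnapshotClass and the controller can now be applied safely.
	kubectl apply \
	  -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	  -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	  -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml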
	I0819 10:46:15.063778  107271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.992164781s)
	I0819 10:46:15.063822  107271 main.go:141] libmachine: (addons-479471) DBG | Closing plugin on server side
	I0819 10:46:15.063815  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:15.063850  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:15.063865  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:15.063874  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:15.063882  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:15.063851  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:15.063931  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:15.063945  107271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.253215351s)
	I0819 10:46:15.063961  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:15.063970  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:15.064072  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:15.064081  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:15.064078  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:15.064089  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:15.064095  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:15.064101  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:15.064109  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:15.064151  107271 main.go:141] libmachine: (addons-479471) DBG | Closing plugin on server side
	I0819 10:46:15.064175  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:15.064182  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:15.064189  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:15.064196  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:15.064565  107271 node_ready.go:35] waiting up to 6m0s for node "addons-479471" to be "Ready" ...
	I0819 10:46:15.065984  107271 main.go:141] libmachine: (addons-479471) DBG | Closing plugin on server side
	I0819 10:46:15.065994  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:15.065996  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:15.066007  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:15.066010  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:15.066017  107271 main.go:141] libmachine: (addons-479471) DBG | Closing plugin on server side
	I0819 10:46:15.066024  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:15.066037  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:15.066046  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:15.065992  107271 main.go:141] libmachine: (addons-479471) DBG | Closing plugin on server side
	I0819 10:46:15.066054  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:15.066339  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:15.066355  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:15.066472  107271 main.go:141] libmachine: (addons-479471) DBG | Closing plugin on server side
	I0819 10:46:15.066501  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:15.066513  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:15.066528  107271 addons.go:475] Verifying addon ingress=true in "addons-479471"
	I0819 10:46:15.067579  107271 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-479471 service yakd-dashboard -n yakd-dashboard
	
	I0819 10:46:15.067983  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:15.069467  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:15.069483  107271 addons.go:475] Verifying addon metrics-server=true in "addons-479471"
	I0819 10:46:15.068046  107271 main.go:141] libmachine: (addons-479471) DBG | Closing plugin on server side
	I0819 10:46:15.070558  107271 out.go:177] * Verifying ingress addon...
	I0819 10:46:15.072594  107271 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0819 10:46:15.081149  107271 node_ready.go:49] node "addons-479471" has status "Ready":"True"
	I0819 10:46:15.081174  107271 node_ready.go:38] duration metric: took 16.306038ms for node "addons-479471" to be "Ready" ...
	I0819 10:46:15.081187  107271 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 10:46:15.104429  107271 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0819 10:46:15.104463  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:15.108365  107271 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-jsq5j" in "kube-system" namespace to be "Ready" ...
	I0819 10:46:15.159826  107271 pod_ready.go:93] pod "coredns-6f6b679f8f-jsq5j" in "kube-system" namespace has status "Ready":"True"
	I0819 10:46:15.159862  107271 pod_ready.go:82] duration metric: took 51.461827ms for pod "coredns-6f6b679f8f-jsq5j" in "kube-system" namespace to be "Ready" ...
	I0819 10:46:15.159878  107271 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-wx5rr" in "kube-system" namespace to be "Ready" ...
	I0819 10:46:15.168886  107271 pod_ready.go:93] pod "coredns-6f6b679f8f-wx5rr" in "kube-system" namespace has status "Ready":"True"
	I0819 10:46:15.168921  107271 pod_ready.go:82] duration metric: took 9.034646ms for pod "coredns-6f6b679f8f-wx5rr" in "kube-system" namespace to be "Ready" ...
	I0819 10:46:15.168935  107271 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-479471" in "kube-system" namespace to be "Ready" ...
	I0819 10:46:15.178432  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:15.200513  107271 pod_ready.go:93] pod "etcd-addons-479471" in "kube-system" namespace has status "Ready":"True"
	I0819 10:46:15.200620  107271 pod_ready.go:82] duration metric: took 31.675195ms for pod "etcd-addons-479471" in "kube-system" namespace to be "Ready" ...
	I0819 10:46:15.200640  107271 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-479471" in "kube-system" namespace to be "Ready" ...
	I0819 10:46:15.225268  107271 pod_ready.go:93] pod "kube-apiserver-addons-479471" in "kube-system" namespace has status "Ready":"True"
	I0819 10:46:15.225308  107271 pod_ready.go:82] duration metric: took 24.658781ms for pod "kube-apiserver-addons-479471" in "kube-system" namespace to be "Ready" ...
	I0819 10:46:15.225326  107271 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-479471" in "kube-system" namespace to be "Ready" ...
	I0819 10:46:15.356874  107271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 10:46:15.468350  107271 pod_ready.go:93] pod "kube-controller-manager-addons-479471" in "kube-system" namespace has status "Ready":"True"
	I0819 10:46:15.468385  107271 pod_ready.go:82] duration metric: took 243.04953ms for pod "kube-controller-manager-addons-479471" in "kube-system" namespace to be "Ready" ...
	I0819 10:46:15.468402  107271 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ccl9p" in "kube-system" namespace to be "Ready" ...
	I0819 10:46:15.568841  107271 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-479471" context rescaled to 1 replicas
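kapi.go:214 scales the stock two-replica coredns Deployment down to a single replica, presumably because one replica is enough on a single-node cluster. The equivalent manual step would be:

	# Scale kube-system/coredns down to one replica.
	kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
	  scale deployment coredns --replicas=1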
	I0819 10:46:15.585350  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:15.682388  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:15.874003  107271 pod_ready.go:93] pod "kube-proxy-ccl9p" in "kube-system" namespace has status "Ready":"True"
	I0819 10:46:15.874033  107271 pod_ready.go:82] duration metric: took 405.623129ms for pod "kube-proxy-ccl9p" in "kube-system" namespace to be "Ready" ...
	I0819 10:46:15.874046  107271 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-479471" in "kube-system" namespace to be "Ready" ...
	I0819 10:46:16.351153  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:16.351431  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:16.353154  107271 pod_ready.go:93] pod "kube-scheduler-addons-479471" in "kube-system" namespace has status "Ready":"True"
	I0819 10:46:16.353176  107271 pod_ready.go:82] duration metric: took 479.122888ms for pod "kube-scheduler-addons-479471" in "kube-system" namespace to be "Ready" ...
	I0819 10:46:16.353188  107271 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace to be "Ready" ...
	I0819 10:46:16.597415  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:16.713061  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:16.855387  107271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.328545236s)
	I0819 10:46:16.855413  107271 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.393473937s)
	I0819 10:46:16.855451  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:16.855467  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:16.855774  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:16.855835  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:16.855852  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:16.855861  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:16.855862  107271 main.go:141] libmachine: (addons-479471) DBG | Closing plugin on server side
	I0819 10:46:16.856173  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:16.856196  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:16.856208  107271 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-479471"
	I0819 10:46:16.857070  107271 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 10:46:16.857915  107271 out.go:177] * Verifying csi-hostpath-driver addon...
	I0819 10:46:16.859772  107271 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0819 10:46:16.860595  107271 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0819 10:46:16.861074  107271 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0819 10:46:16.861097  107271 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0819 10:46:16.879352  107271 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 10:46:16.879376  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:16.925292  107271 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0819 10:46:16.925323  107271 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0819 10:46:17.003928  107271 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 10:46:17.003953  107271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0819 10:46:17.048884  107271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 10:46:17.077422  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:17.170189  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:17.365986  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:17.578110  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:17.669917  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:17.764669  107271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.407736118s)
	I0819 10:46:17.764729  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:17.764744  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:17.765044  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:17.765066  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:17.765080  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:17.765081  107271 main.go:141] libmachine: (addons-479471) DBG | Closing plugin on server side
	I0819 10:46:17.765090  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:17.765378  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:17.765413  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:17.866420  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:18.077319  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:18.177880  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:18.406306  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:46:18.423280  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:18.554275  107271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.505323737s)
	I0819 10:46:18.554334  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:18.554346  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:18.554689  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:18.554708  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:18.554717  107271 main.go:141] libmachine: Making call to close driver server
	I0819 10:46:18.554724  107271 main.go:141] libmachine: (addons-479471) Calling .Close
	I0819 10:46:18.554991  107271 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:46:18.555010  107271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:46:18.555025  107271 main.go:141] libmachine: (addons-479471) DBG | Closing plugin on server side
	I0819 10:46:18.556868  107271 addons.go:475] Verifying addon gcp-auth=true in "addons-479471"
	I0819 10:46:18.558392  107271 out.go:177] * Verifying gcp-auth addon...
	I0819 10:46:18.560393  107271 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0819 10:46:18.580991  107271 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 10:46:18.581017  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:18.581492  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:18.681939  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:18.867073  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:19.064891  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:19.078263  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:19.169680  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:19.368707  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:19.564198  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:19.576620  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:19.669900  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:19.866753  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:20.063681  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:20.077196  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:20.169631  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:20.365870  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:20.564870  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:20.577260  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:20.670295  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:20.860003  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:46:20.864976  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:21.064840  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:21.077215  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:21.170009  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:21.364731  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:21.564272  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:21.577057  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:21.670650  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:21.865402  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:22.064195  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:22.076894  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:22.169857  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:22.690460  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:22.690667  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:22.692595  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:22.693304  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:22.860560  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:46:22.865265  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:23.065329  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:23.076683  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:23.171036  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:23.366237  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:23.564725  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:23.579071  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:23.677231  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:23.864479  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:24.064221  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:24.076815  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:24.169710  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:24.365351  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:24.564851  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:24.577947  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:24.669469  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:24.865656  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:25.064267  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:25.076732  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:25.170381  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:25.359406  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:46:25.364730  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:25.564197  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:25.576821  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:25.670294  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:25.873313  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:26.064507  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:26.076801  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:26.169918  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:26.365342  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:26.564845  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:26.577464  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:26.669331  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:26.907553  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:27.064113  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:27.076559  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:27.171047  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:27.359809  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:46:27.366427  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:27.569058  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:27.587876  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:27.683407  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:27.865094  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:28.064523  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:28.077262  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:28.170574  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:28.364976  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:28.565228  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:28.576907  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:28.671012  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:28.866507  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:29.064158  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:29.076639  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:29.170289  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:29.363882  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:29.564171  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:29.576581  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:29.670657  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:29.858495  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:46:29.866024  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:30.064350  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:30.077110  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:30.169456  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:30.365789  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:30.563703  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:30.577007  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:30.670591  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:30.867793  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:31.064149  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:31.077223  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:31.169840  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:31.366765  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:31.564162  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:31.576474  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:31.669844  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:31.859584  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:46:31.865240  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:32.066263  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:32.078272  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:32.169440  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:32.365941  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:32.565361  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:32.577394  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:32.670406  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:33.116093  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:33.116772  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:33.116972  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:33.209334  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:33.365321  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:33.564238  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:33.576650  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:33.670570  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:33.859823  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:46:33.866458  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:34.064673  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:34.078131  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:34.170035  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:34.364544  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:34.564376  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:34.576743  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:34.670656  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:34.864086  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:35.064451  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:35.077072  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:35.170094  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:35.364260  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:35.563718  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:35.577442  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:35.670604  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:35.860199  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:46:35.865424  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:36.063770  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:36.077815  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:36.170226  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:36.364620  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:36.568965  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:36.587984  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:36.670772  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:36.865013  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:37.064308  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:37.076994  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:37.170390  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:37.364600  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:37.564878  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:37.666426  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:37.669531  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:37.866999  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:46:37.869616  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:38.064634  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:38.077264  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:38.170004  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:38.364518  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:38.564194  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:38.577344  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:38.670164  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:38.867101  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:39.064895  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:39.077530  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:39.172698  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:39.365648  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:39.563381  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:39.577444  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:39.670472  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:39.865252  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:40.064560  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:40.077174  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:40.184281  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:40.359551  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:46:40.365389  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:40.563715  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:40.577361  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:40.670607  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:40.864619  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:41.064274  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:41.077294  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:41.169681  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:41.364795  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:41.564576  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:41.579046  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:41.669657  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:41.866052  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:42.066545  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:42.078233  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:42.170100  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:42.365471  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:46:42.369087  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:42.564578  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:42.577874  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:42.669788  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:42.864599  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:43.064047  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:43.076161  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:43.170576  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:43.364787  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:43.564493  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:43.577661  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:43.669428  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:43.865836  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:44.244676  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:44.244751  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:44.244884  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:44.372766  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:44.376064  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:46:44.565331  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:44.576917  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:44.670194  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:44.864945  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:45.065463  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:45.077678  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:45.171095  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:45.365790  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:45.565589  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:45.576956  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:45.671290  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:45.864982  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:46.064713  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:46.077120  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:46.169890  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:46.365761  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:46.565247  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:46.578052  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:46.671098  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:46.861235  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:46:46.867590  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:47.065527  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:47.080407  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:47.173100  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:47.366237  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:47.564819  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:47.578018  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:47.669424  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:47.864895  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:48.063903  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:48.078414  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:48.171164  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:48.364683  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:48.563950  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:48.578526  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:48.670417  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:48.865284  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:49.063696  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:49.077257  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:49.170081  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:49.605085  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:49.606735  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:49.607182  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:46:49.607211  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:49.670904  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:49.864933  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:50.064464  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:50.077283  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:50.169569  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:50.365437  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:50.565110  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:50.576402  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:50.669874  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 10:46:50.864693  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:51.066855  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:51.077365  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:51.170618  107271 kapi.go:107] duration metric: took 39.004515758s to wait for kubernetes.io/minikube-addons=registry ...
	I0819 10:46:51.364763  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:51.564384  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:51.576825  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:51.859680  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:46:51.865526  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:52.063848  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:52.078142  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:52.471993  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:52.564055  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:52.576373  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:52.864997  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:53.069207  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:53.079370  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:53.365946  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:53.565261  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:53.577437  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:53.859855  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:46:53.866232  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:54.064863  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:54.078400  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:54.365141  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:54.567766  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:54.577119  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:54.864346  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:55.063803  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:55.077193  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:55.366467  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:55.563572  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:55.576811  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:56.118934  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:56.121497  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:56.122213  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:56.129435  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:46:56.365514  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:56.564228  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:56.576906  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:56.864649  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:57.063848  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:57.077576  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:57.369241  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:57.564387  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:57.576942  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:57.864840  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:58.064256  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:58.077321  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:58.359900  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:46:58.365037  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:58.565590  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:58.576857  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:58.864130  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:59.063464  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:59.077175  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:59.365590  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:46:59.564718  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:46:59.577840  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:46:59.865335  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:00.063556  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:00.076884  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:00.365475  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:00.564749  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:00.577628  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:00.859850  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:47:00.866180  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:01.064613  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:01.077310  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:01.368218  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:01.564503  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:01.577595  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:01.864820  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:02.064705  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:02.077316  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:02.367167  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:02.568665  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:02.581610  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:02.860255  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:47:02.865436  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:03.064635  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:03.077115  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:03.364967  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:03.564337  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:03.583381  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:03.864803  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:04.066168  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:04.077620  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:04.367848  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:04.564951  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:04.577773  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:04.865517  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:05.063633  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:05.077909  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:05.359745  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:47:05.365927  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:05.564858  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:05.579021  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:05.867401  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:06.065092  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:06.080948  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:06.369829  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:06.565142  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:06.576801  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:06.864551  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:07.070611  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:07.076674  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:07.365472  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:07.565791  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:07.580144  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:07.860080  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:47:07.864883  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:08.064025  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:08.077755  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:08.364760  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:08.564505  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:08.577030  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:08.866477  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:09.063929  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:09.165759  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:09.369044  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:09.566243  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:09.578099  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:09.865367  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:10.065495  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:10.078519  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:10.362119  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:47:10.365249  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:10.564161  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:10.577401  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:10.868011  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:11.065856  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:11.166312  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:11.364709  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:11.564461  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:11.577119  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:11.866552  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:12.064739  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:12.078433  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:12.368270  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:12.563460  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:12.577483  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:12.873090  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:47:12.873328  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:13.064927  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:13.084509  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:13.604484  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:13.604795  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:13.605146  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:13.865335  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:14.064127  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:14.076847  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:14.366136  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:14.564588  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:14.578176  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:14.864602  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:15.066503  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:15.077386  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:15.359158  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:47:15.365177  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:15.564545  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:15.576985  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:15.865284  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:16.063427  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:16.077800  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:16.364233  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:16.565097  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:16.576877  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:16.864882  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:17.064083  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:17.076539  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:17.360826  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:47:17.364429  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 10:47:17.563713  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:17.577789  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:17.865657  107271 kapi.go:107] duration metric: took 1m1.005063158s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0819 10:47:18.064241  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:18.076827  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:18.564820  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:18.577196  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:19.065211  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:19.076626  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:19.566914  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:19.576095  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:19.859673  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:47:20.063443  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:20.076703  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:20.565128  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:20.576799  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:21.065388  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:21.076834  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:21.563459  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:21.576824  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:22.064015  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:22.077765  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:22.499396  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:47:22.564684  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:22.576955  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:23.064414  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:23.077204  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:23.563745  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:23.577147  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:24.064735  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:24.077130  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:24.892686  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:24.893574  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:24.896489  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:47:25.065512  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:25.077149  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:25.564228  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:25.576686  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:26.066460  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:26.078759  107271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 10:47:26.563990  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:26.577173  107271 kapi.go:107] duration metric: took 1m11.504576553s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0819 10:47:27.064994  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:27.360060  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:47:27.578302  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:28.063697  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:28.564214  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:29.064825  107271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 10:47:29.565268  107271 kapi.go:107] duration metric: took 1m11.004870612s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0819 10:47:29.566825  107271 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-479471 cluster.
	I0819 10:47:29.567914  107271 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0819 10:47:29.569175  107271 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0819 10:47:29.570709  107271 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, ingress-dns, default-storageclass, nvidia-device-plugin, storage-provisioner-rancher, helm-tiller, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0819 10:47:29.571949  107271 addons.go:510] duration metric: took 1m23.0000583s for enable addons: enabled=[storage-provisioner cloud-spanner ingress-dns default-storageclass nvidia-device-plugin storage-provisioner-rancher helm-tiller inspektor-gadget metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0819 10:47:29.860292  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:47:32.359971  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:47:34.363523  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:47:36.858627  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:47:38.859389  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:47:41.358962  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:47:43.359935  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:47:45.859771  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:47:48.359881  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:47:50.359935  107271 pod_ready.go:103] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"False"
	I0819 10:47:51.861167  107271 pod_ready.go:93] pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace has status "Ready":"True"
	I0819 10:47:51.861198  107271 pod_ready.go:82] duration metric: took 1m35.50800227s for pod "metrics-server-8988944d9-vkpdk" in "kube-system" namespace to be "Ready" ...
	I0819 10:47:51.861208  107271 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-db589" in "kube-system" namespace to be "Ready" ...
	I0819 10:47:51.867677  107271 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-db589" in "kube-system" namespace has status "Ready":"True"
	I0819 10:47:51.867712  107271 pod_ready.go:82] duration metric: took 6.495705ms for pod "nvidia-device-plugin-daemonset-db589" in "kube-system" namespace to be "Ready" ...
	I0819 10:47:51.867754  107271 pod_ready.go:39] duration metric: took 1m36.786553846s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 10:47:51.867778  107271 api_server.go:52] waiting for apiserver process to appear ...
	I0819 10:47:51.867830  107271 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 10:47:51.867897  107271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 10:47:51.906912  107271 cri.go:89] found id: "cded472376d37bf82e079877c80180a63553d6002f8f8c50cc7bacbc36fe72ec"
	I0819 10:47:51.906936  107271 cri.go:89] found id: ""
	I0819 10:47:51.906945  107271 logs.go:276] 1 containers: [cded472376d37bf82e079877c80180a63553d6002f8f8c50cc7bacbc36fe72ec]
	I0819 10:47:51.907000  107271 ssh_runner.go:195] Run: which crictl
	I0819 10:47:51.911112  107271 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 10:47:51.911177  107271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 10:47:51.948089  107271 cri.go:89] found id: "01339403ce0c03edd7acaf140bcffb5751ccb11fdddf1cfca0d960815c435c9b"
	I0819 10:47:51.948117  107271 cri.go:89] found id: ""
	I0819 10:47:51.948127  107271 logs.go:276] 1 containers: [01339403ce0c03edd7acaf140bcffb5751ccb11fdddf1cfca0d960815c435c9b]
	I0819 10:47:51.948179  107271 ssh_runner.go:195] Run: which crictl
	I0819 10:47:51.952161  107271 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 10:47:51.952243  107271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 10:47:52.012064  107271 cri.go:89] found id: "1692a63fdbb4f85a4e630234eb05e4b52afb0d5112c2e96472de8ddadddfa4d5"
	I0819 10:47:52.012087  107271 cri.go:89] found id: ""
	I0819 10:47:52.012096  107271 logs.go:276] 1 containers: [1692a63fdbb4f85a4e630234eb05e4b52afb0d5112c2e96472de8ddadddfa4d5]
	I0819 10:47:52.012156  107271 ssh_runner.go:195] Run: which crictl
	I0819 10:47:52.022431  107271 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 10:47:52.022517  107271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 10:47:52.061368  107271 cri.go:89] found id: "cdad6d62849c081464050d1f0e74f91d7f2571f00db5c672a356592eca0677e5"
	I0819 10:47:52.061401  107271 cri.go:89] found id: ""
	I0819 10:47:52.061413  107271 logs.go:276] 1 containers: [cdad6d62849c081464050d1f0e74f91d7f2571f00db5c672a356592eca0677e5]
	I0819 10:47:52.061473  107271 ssh_runner.go:195] Run: which crictl
	I0819 10:47:52.065998  107271 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 10:47:52.066060  107271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 10:47:52.101688  107271 cri.go:89] found id: "7606ed793b98054581620f4e86a6d1f638cb31c56b140811fbcde53e9ab9ab07"
	I0819 10:47:52.101710  107271 cri.go:89] found id: ""
	I0819 10:47:52.101717  107271 logs.go:276] 1 containers: [7606ed793b98054581620f4e86a6d1f638cb31c56b140811fbcde53e9ab9ab07]
	I0819 10:47:52.101767  107271 ssh_runner.go:195] Run: which crictl
	I0819 10:47:52.105607  107271 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 10:47:52.105688  107271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 10:47:52.148141  107271 cri.go:89] found id: "db174ae2f931ceb47d4096945c7bb8e6694e2a88ed9e14b6223a860b476448bd"
	I0819 10:47:52.148164  107271 cri.go:89] found id: ""
	I0819 10:47:52.148174  107271 logs.go:276] 1 containers: [db174ae2f931ceb47d4096945c7bb8e6694e2a88ed9e14b6223a860b476448bd]
	I0819 10:47:52.148288  107271 ssh_runner.go:195] Run: which crictl
	I0819 10:47:52.152279  107271 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 10:47:52.152355  107271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 10:47:52.194791  107271 cri.go:89] found id: ""
	I0819 10:47:52.194817  107271 logs.go:276] 0 containers: []
	W0819 10:47:52.194825  107271 logs.go:278] No container was found matching "kindnet"
	I0819 10:47:52.194834  107271 logs.go:123] Gathering logs for dmesg ...
	I0819 10:47:52.194849  107271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 10:47:52.208905  107271 logs.go:123] Gathering logs for kube-apiserver [cded472376d37bf82e079877c80180a63553d6002f8f8c50cc7bacbc36fe72ec] ...
	I0819 10:47:52.208934  107271 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cded472376d37bf82e079877c80180a63553d6002f8f8c50cc7bacbc36fe72ec"
	I0819 10:47:52.256827  107271 logs.go:123] Gathering logs for kube-controller-manager [db174ae2f931ceb47d4096945c7bb8e6694e2a88ed9e14b6223a860b476448bd] ...
	I0819 10:47:52.256862  107271 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db174ae2f931ceb47d4096945c7bb8e6694e2a88ed9e14b6223a860b476448bd"
	I0819 10:47:52.316938  107271 logs.go:123] Gathering logs for CRI-O ...
	I0819 10:47:52.316975  107271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 10:47:53.400028  107271 logs.go:123] Gathering logs for kubelet ...
	I0819 10:47:53.400081  107271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 10:47:53.490373  107271 logs.go:123] Gathering logs for etcd [01339403ce0c03edd7acaf140bcffb5751ccb11fdddf1cfca0d960815c435c9b] ...
	I0819 10:47:53.490424  107271 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01339403ce0c03edd7acaf140bcffb5751ccb11fdddf1cfca0d960815c435c9b"
	I0819 10:47:53.553574  107271 logs.go:123] Gathering logs for coredns [1692a63fdbb4f85a4e630234eb05e4b52afb0d5112c2e96472de8ddadddfa4d5] ...
	I0819 10:47:53.553635  107271 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1692a63fdbb4f85a4e630234eb05e4b52afb0d5112c2e96472de8ddadddfa4d5"
	I0819 10:47:53.593817  107271 logs.go:123] Gathering logs for kube-scheduler [cdad6d62849c081464050d1f0e74f91d7f2571f00db5c672a356592eca0677e5] ...
	I0819 10:47:53.593863  107271 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdad6d62849c081464050d1f0e74f91d7f2571f00db5c672a356592eca0677e5"
	I0819 10:47:53.640297  107271 logs.go:123] Gathering logs for kube-proxy [7606ed793b98054581620f4e86a6d1f638cb31c56b140811fbcde53e9ab9ab07] ...
	I0819 10:47:53.640340  107271 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7606ed793b98054581620f4e86a6d1f638cb31c56b140811fbcde53e9ab9ab07"
	I0819 10:47:53.686792  107271 logs.go:123] Gathering logs for container status ...
	I0819 10:47:53.686844  107271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 10:47:53.744102  107271 logs.go:123] Gathering logs for describe nodes ...
	I0819 10:47:53.744148  107271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 10:47:56.405571  107271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:47:56.423912  107271 api_server.go:72] duration metric: took 1m49.852052942s to wait for apiserver process to appear ...
	I0819 10:47:56.423957  107271 api_server.go:88] waiting for apiserver healthz status ...
	I0819 10:47:56.424006  107271 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 10:47:56.424078  107271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 10:47:56.475657  107271 cri.go:89] found id: "cded472376d37bf82e079877c80180a63553d6002f8f8c50cc7bacbc36fe72ec"
	I0819 10:47:56.475693  107271 cri.go:89] found id: ""
	I0819 10:47:56.475704  107271 logs.go:276] 1 containers: [cded472376d37bf82e079877c80180a63553d6002f8f8c50cc7bacbc36fe72ec]
	I0819 10:47:56.475795  107271 ssh_runner.go:195] Run: which crictl
	I0819 10:47:56.480035  107271 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 10:47:56.480118  107271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 10:47:56.519991  107271 cri.go:89] found id: "01339403ce0c03edd7acaf140bcffb5751ccb11fdddf1cfca0d960815c435c9b"
	I0819 10:47:56.520020  107271 cri.go:89] found id: ""
	I0819 10:47:56.520030  107271 logs.go:276] 1 containers: [01339403ce0c03edd7acaf140bcffb5751ccb11fdddf1cfca0d960815c435c9b]
	I0819 10:47:56.520095  107271 ssh_runner.go:195] Run: which crictl
	I0819 10:47:56.524342  107271 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 10:47:56.524407  107271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 10:47:56.562169  107271 cri.go:89] found id: "1692a63fdbb4f85a4e630234eb05e4b52afb0d5112c2e96472de8ddadddfa4d5"
	I0819 10:47:56.562194  107271 cri.go:89] found id: ""
	I0819 10:47:56.562203  107271 logs.go:276] 1 containers: [1692a63fdbb4f85a4e630234eb05e4b52afb0d5112c2e96472de8ddadddfa4d5]
	I0819 10:47:56.562270  107271 ssh_runner.go:195] Run: which crictl
	I0819 10:47:56.566661  107271 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 10:47:56.566730  107271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 10:47:56.606729  107271 cri.go:89] found id: "cdad6d62849c081464050d1f0e74f91d7f2571f00db5c672a356592eca0677e5"
	I0819 10:47:56.606761  107271 cri.go:89] found id: ""
	I0819 10:47:56.606772  107271 logs.go:276] 1 containers: [cdad6d62849c081464050d1f0e74f91d7f2571f00db5c672a356592eca0677e5]
	I0819 10:47:56.606839  107271 ssh_runner.go:195] Run: which crictl
	I0819 10:47:56.611241  107271 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 10:47:56.611324  107271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 10:47:56.657675  107271 cri.go:89] found id: "7606ed793b98054581620f4e86a6d1f638cb31c56b140811fbcde53e9ab9ab07"
	I0819 10:47:56.657704  107271 cri.go:89] found id: ""
	I0819 10:47:56.657714  107271 logs.go:276] 1 containers: [7606ed793b98054581620f4e86a6d1f638cb31c56b140811fbcde53e9ab9ab07]
	I0819 10:47:56.657788  107271 ssh_runner.go:195] Run: which crictl
	I0819 10:47:56.662802  107271 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 10:47:56.662891  107271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 10:47:56.700487  107271 cri.go:89] found id: "db174ae2f931ceb47d4096945c7bb8e6694e2a88ed9e14b6223a860b476448bd"
	I0819 10:47:56.700514  107271 cri.go:89] found id: ""
	I0819 10:47:56.700522  107271 logs.go:276] 1 containers: [db174ae2f931ceb47d4096945c7bb8e6694e2a88ed9e14b6223a860b476448bd]
	I0819 10:47:56.700575  107271 ssh_runner.go:195] Run: which crictl
	I0819 10:47:56.704712  107271 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 10:47:56.704793  107271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 10:47:56.742212  107271 cri.go:89] found id: ""
	I0819 10:47:56.742245  107271 logs.go:276] 0 containers: []
	W0819 10:47:56.742254  107271 logs.go:278] No container was found matching "kindnet"
	I0819 10:47:56.742263  107271 logs.go:123] Gathering logs for describe nodes ...
	I0819 10:47:56.742277  107271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 10:47:56.854075  107271 logs.go:123] Gathering logs for kube-apiserver [cded472376d37bf82e079877c80180a63553d6002f8f8c50cc7bacbc36fe72ec] ...
	I0819 10:47:56.854112  107271 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cded472376d37bf82e079877c80180a63553d6002f8f8c50cc7bacbc36fe72ec"
	I0819 10:47:56.899588  107271 logs.go:123] Gathering logs for coredns [1692a63fdbb4f85a4e630234eb05e4b52afb0d5112c2e96472de8ddadddfa4d5] ...
	I0819 10:47:56.899627  107271 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1692a63fdbb4f85a4e630234eb05e4b52afb0d5112c2e96472de8ddadddfa4d5"
	I0819 10:47:56.936913  107271 logs.go:123] Gathering logs for kube-scheduler [cdad6d62849c081464050d1f0e74f91d7f2571f00db5c672a356592eca0677e5] ...
	I0819 10:47:56.936945  107271 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdad6d62849c081464050d1f0e74f91d7f2571f00db5c672a356592eca0677e5"
	I0819 10:47:56.981481  107271 logs.go:123] Gathering logs for CRI-O ...
	I0819 10:47:56.981520  107271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"

                                                
                                                
** /stderr **
addons_test.go:112: out/minikube-linux-amd64 start -p addons-479471 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: signal: killed
--- FAIL: TestAddons/Setup (2400.07s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 node stop m02 -v=7 --alsologtostderr
E0819 11:33:55.842672  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:34:16.324954  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:34:57.287352  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-503856 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.473708698s)

                                                
                                                
-- stdout --
	* Stopping node "ha-503856-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:33:50.737531  125357 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:33:50.737810  125357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:33:50.737820  125357 out.go:358] Setting ErrFile to fd 2...
	I0819 11:33:50.737824  125357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:33:50.738045  125357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 11:33:50.738346  125357 mustload.go:65] Loading cluster: ha-503856
	I0819 11:33:50.738739  125357 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:33:50.738757  125357 stop.go:39] StopHost: ha-503856-m02
	I0819 11:33:50.739164  125357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:33:50.739216  125357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:33:50.755141  125357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33423
	I0819 11:33:50.755678  125357 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:33:50.756289  125357 main.go:141] libmachine: Using API Version  1
	I0819 11:33:50.756314  125357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:33:50.756690  125357 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:33:50.759006  125357 out.go:177] * Stopping node "ha-503856-m02"  ...
	I0819 11:33:50.760722  125357 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 11:33:50.760771  125357 main.go:141] libmachine: (ha-503856-m02) Calling .DriverName
	I0819 11:33:50.761111  125357 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 11:33:50.761135  125357 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:33:50.764198  125357 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:33:50.764838  125357 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:33:50.764882  125357 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:33:50.764997  125357 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:33:50.765184  125357 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:33:50.765332  125357 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:33:50.765472  125357 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/id_rsa Username:docker}
	I0819 11:33:50.850803  125357 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 11:33:50.905047  125357 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 11:33:50.958804  125357 main.go:141] libmachine: Stopping "ha-503856-m02"...
	I0819 11:33:50.958842  125357 main.go:141] libmachine: (ha-503856-m02) Calling .GetState
	I0819 11:33:50.960474  125357 main.go:141] libmachine: (ha-503856-m02) Calling .Stop
	I0819 11:33:50.964083  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 0/120
	I0819 11:33:51.965488  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 1/120
	I0819 11:33:52.966903  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 2/120
	I0819 11:33:53.968525  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 3/120
	I0819 11:33:54.970486  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 4/120
	I0819 11:33:55.972609  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 5/120
	I0819 11:33:56.974176  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 6/120
	I0819 11:33:57.975608  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 7/120
	I0819 11:33:58.976934  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 8/120
	I0819 11:33:59.978931  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 9/120
	I0819 11:34:00.981200  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 10/120
	I0819 11:34:01.982427  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 11/120
	I0819 11:34:02.983815  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 12/120
	I0819 11:34:03.985209  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 13/120
	I0819 11:34:04.986771  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 14/120
	I0819 11:34:05.989093  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 15/120
	I0819 11:34:06.990560  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 16/120
	I0819 11:34:07.992058  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 17/120
	I0819 11:34:08.994242  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 18/120
	I0819 11:34:09.995698  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 19/120
	I0819 11:34:10.997330  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 20/120
	I0819 11:34:11.998763  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 21/120
	I0819 11:34:13.000263  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 22/120
	I0819 11:34:14.001743  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 23/120
	I0819 11:34:15.003231  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 24/120
	I0819 11:34:16.005333  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 25/120
	I0819 11:34:17.006960  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 26/120
	I0819 11:34:18.008342  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 27/120
	I0819 11:34:19.010249  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 28/120
	I0819 11:34:20.011473  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 29/120
	I0819 11:34:21.013656  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 30/120
	I0819 11:34:22.015162  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 31/120
	I0819 11:34:23.016692  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 32/120
	I0819 11:34:24.018695  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 33/120
	I0819 11:34:25.020403  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 34/120
	I0819 11:34:26.022410  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 35/120
	I0819 11:34:27.023686  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 36/120
	I0819 11:34:28.025541  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 37/120
	I0819 11:34:29.027176  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 38/120
	I0819 11:34:30.028724  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 39/120
	I0819 11:34:31.030313  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 40/120
	I0819 11:34:32.031565  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 41/120
	I0819 11:34:33.032887  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 42/120
	I0819 11:34:34.034399  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 43/120
	I0819 11:34:35.035778  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 44/120
	I0819 11:34:36.037363  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 45/120
	I0819 11:34:37.038873  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 46/120
	I0819 11:34:38.040396  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 47/120
	I0819 11:34:39.042383  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 48/120
	I0819 11:34:40.043767  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 49/120
	I0819 11:34:41.045202  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 50/120
	I0819 11:34:42.046869  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 51/120
	I0819 11:34:43.049343  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 52/120
	I0819 11:34:44.050612  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 53/120
	I0819 11:34:45.052015  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 54/120
	I0819 11:34:46.054186  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 55/120
	I0819 11:34:47.055721  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 56/120
	I0819 11:34:48.057155  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 57/120
	I0819 11:34:49.058722  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 58/120
	I0819 11:34:50.060275  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 59/120
	I0819 11:34:51.062098  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 60/120
	I0819 11:34:52.063563  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 61/120
	I0819 11:34:53.065013  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 62/120
	I0819 11:34:54.066557  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 63/120
	I0819 11:34:55.068016  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 64/120
	I0819 11:34:56.070198  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 65/120
	I0819 11:34:57.071764  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 66/120
	I0819 11:34:58.073295  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 67/120
	I0819 11:34:59.074958  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 68/120
	I0819 11:35:00.076291  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 69/120
	I0819 11:35:01.078325  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 70/120
	I0819 11:35:02.080407  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 71/120
	I0819 11:35:03.082139  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 72/120
	I0819 11:35:04.083586  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 73/120
	I0819 11:35:05.085336  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 74/120
	I0819 11:35:06.087344  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 75/120
	I0819 11:35:07.088568  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 76/120
	I0819 11:35:08.090041  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 77/120
	I0819 11:35:09.091379  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 78/120
	I0819 11:35:10.092804  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 79/120
	I0819 11:35:11.094729  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 80/120
	I0819 11:35:12.096373  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 81/120
	I0819 11:35:13.097946  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 82/120
	I0819 11:35:14.100273  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 83/120
	I0819 11:35:15.102027  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 84/120
	I0819 11:35:16.104147  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 85/120
	I0819 11:35:17.105540  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 86/120
	I0819 11:35:18.107067  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 87/120
	I0819 11:35:19.108458  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 88/120
	I0819 11:35:20.110275  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 89/120
	I0819 11:35:21.112611  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 90/120
	I0819 11:35:22.114043  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 91/120
	I0819 11:35:23.115535  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 92/120
	I0819 11:35:24.116778  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 93/120
	I0819 11:35:25.118440  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 94/120
	I0819 11:35:26.120819  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 95/120
	I0819 11:35:27.122577  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 96/120
	I0819 11:35:28.124311  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 97/120
	I0819 11:35:29.126515  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 98/120
	I0819 11:35:30.127961  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 99/120
	I0819 11:35:31.130565  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 100/120
	I0819 11:35:32.132277  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 101/120
	I0819 11:35:33.133928  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 102/120
	I0819 11:35:34.135524  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 103/120
	I0819 11:35:35.137587  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 104/120
	I0819 11:35:36.139105  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 105/120
	I0819 11:35:37.140722  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 106/120
	I0819 11:35:38.142487  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 107/120
	I0819 11:35:39.144506  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 108/120
	I0819 11:35:40.146421  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 109/120
	I0819 11:35:41.148504  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 110/120
	I0819 11:35:42.150659  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 111/120
	I0819 11:35:43.152392  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 112/120
	I0819 11:35:44.153903  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 113/120
	I0819 11:35:45.155438  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 114/120
	I0819 11:35:46.157177  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 115/120
	I0819 11:35:47.159380  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 116/120
	I0819 11:35:48.160870  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 117/120
	I0819 11:35:49.162296  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 118/120
	I0819 11:35:50.163778  125357 main.go:141] libmachine: (ha-503856-m02) Waiting for machine to stop 119/120
	I0819 11:35:51.165129  125357 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0819 11:35:51.165295  125357 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-503856 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr: exit status 3 (18.98410277s)

                                                
                                                
-- stdout --
	ha-503856
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-503856-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-503856-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-503856-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:35:51.213771  125786 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:35:51.213914  125786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:35:51.213923  125786 out.go:358] Setting ErrFile to fd 2...
	I0819 11:35:51.213927  125786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:35:51.214170  125786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 11:35:51.214442  125786 out.go:352] Setting JSON to false
	I0819 11:35:51.214474  125786 mustload.go:65] Loading cluster: ha-503856
	I0819 11:35:51.214804  125786 notify.go:220] Checking for updates...
	I0819 11:35:51.215949  125786 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:35:51.216016  125786 status.go:255] checking status of ha-503856 ...
	I0819 11:35:51.216542  125786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:35:51.216581  125786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:35:51.232819  125786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38999
	I0819 11:35:51.233336  125786 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:35:51.234027  125786 main.go:141] libmachine: Using API Version  1
	I0819 11:35:51.234055  125786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:35:51.234443  125786 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:35:51.234688  125786 main.go:141] libmachine: (ha-503856) Calling .GetState
	I0819 11:35:51.236695  125786 status.go:330] ha-503856 host status = "Running" (err=<nil>)
	I0819 11:35:51.236719  125786 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:35:51.237011  125786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:35:51.237058  125786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:35:51.252344  125786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44317
	I0819 11:35:51.252876  125786 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:35:51.253439  125786 main.go:141] libmachine: Using API Version  1
	I0819 11:35:51.253462  125786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:35:51.253792  125786 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:35:51.254035  125786 main.go:141] libmachine: (ha-503856) Calling .GetIP
	I0819 11:35:51.257250  125786 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:35:51.257742  125786 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:35:51.257778  125786 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:35:51.257997  125786 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:35:51.258307  125786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:35:51.258343  125786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:35:51.275181  125786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34513
	I0819 11:35:51.275626  125786 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:35:51.276123  125786 main.go:141] libmachine: Using API Version  1
	I0819 11:35:51.276148  125786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:35:51.276508  125786 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:35:51.276727  125786 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:35:51.276917  125786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:35:51.276944  125786 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:35:51.280336  125786 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:35:51.280727  125786 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:35:51.280764  125786 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:35:51.280936  125786 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:35:51.281169  125786 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:35:51.281325  125786 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:35:51.281524  125786 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:35:51.364051  125786 ssh_runner.go:195] Run: systemctl --version
	I0819 11:35:51.370483  125786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:35:51.386756  125786 kubeconfig.go:125] found "ha-503856" server: "https://192.168.39.254:8443"
	I0819 11:35:51.386792  125786 api_server.go:166] Checking apiserver status ...
	I0819 11:35:51.386832  125786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:35:51.402073  125786 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup
	W0819 11:35:51.414862  125786 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:35:51.414926  125786 ssh_runner.go:195] Run: ls
	I0819 11:35:51.420893  125786 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 11:35:51.425448  125786 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 11:35:51.425495  125786 status.go:422] ha-503856 apiserver status = Running (err=<nil>)
	I0819 11:35:51.425508  125786 status.go:257] ha-503856 status: &{Name:ha-503856 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:35:51.425530  125786 status.go:255] checking status of ha-503856-m02 ...
	I0819 11:35:51.425860  125786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:35:51.425888  125786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:35:51.441343  125786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45705
	I0819 11:35:51.441837  125786 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:35:51.442271  125786 main.go:141] libmachine: Using API Version  1
	I0819 11:35:51.442293  125786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:35:51.442605  125786 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:35:51.442798  125786 main.go:141] libmachine: (ha-503856-m02) Calling .GetState
	I0819 11:35:51.444450  125786 status.go:330] ha-503856-m02 host status = "Running" (err=<nil>)
	I0819 11:35:51.444469  125786 host.go:66] Checking if "ha-503856-m02" exists ...
	I0819 11:35:51.444787  125786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:35:51.444837  125786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:35:51.460007  125786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43073
	I0819 11:35:51.460487  125786 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:35:51.460967  125786 main.go:141] libmachine: Using API Version  1
	I0819 11:35:51.460987  125786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:35:51.461274  125786 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:35:51.461436  125786 main.go:141] libmachine: (ha-503856-m02) Calling .GetIP
	I0819 11:35:51.464069  125786 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:35:51.464600  125786 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:35:51.464631  125786 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:35:51.464800  125786 host.go:66] Checking if "ha-503856-m02" exists ...
	I0819 11:35:51.465094  125786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:35:51.465140  125786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:35:51.482107  125786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36657
	I0819 11:35:51.482543  125786 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:35:51.483073  125786 main.go:141] libmachine: Using API Version  1
	I0819 11:35:51.483096  125786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:35:51.483409  125786 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:35:51.483600  125786 main.go:141] libmachine: (ha-503856-m02) Calling .DriverName
	I0819 11:35:51.483851  125786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:35:51.483872  125786 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:35:51.486447  125786 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:35:51.486825  125786 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:35:51.486854  125786 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:35:51.486981  125786 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:35:51.487164  125786 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:35:51.487319  125786 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:35:51.487455  125786 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/id_rsa Username:docker}
	W0819 11:36:09.803939  125786 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.183:22: connect: no route to host
	W0819 11:36:09.804048  125786 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	E0819 11:36:09.804070  125786 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	I0819 11:36:09.804078  125786 status.go:257] ha-503856-m02 status: &{Name:ha-503856-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 11:36:09.804096  125786 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	I0819 11:36:09.804106  125786 status.go:255] checking status of ha-503856-m03 ...
	I0819 11:36:09.804414  125786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:09.804474  125786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:09.820356  125786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32823
	I0819 11:36:09.820811  125786 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:09.821253  125786 main.go:141] libmachine: Using API Version  1
	I0819 11:36:09.821278  125786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:09.821655  125786 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:09.821882  125786 main.go:141] libmachine: (ha-503856-m03) Calling .GetState
	I0819 11:36:09.823601  125786 status.go:330] ha-503856-m03 host status = "Running" (err=<nil>)
	I0819 11:36:09.823622  125786 host.go:66] Checking if "ha-503856-m03" exists ...
	I0819 11:36:09.824001  125786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:09.824043  125786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:09.839570  125786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41173
	I0819 11:36:09.840031  125786 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:09.840527  125786 main.go:141] libmachine: Using API Version  1
	I0819 11:36:09.840549  125786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:09.840916  125786 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:09.841139  125786 main.go:141] libmachine: (ha-503856-m03) Calling .GetIP
	I0819 11:36:09.844206  125786 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:09.844687  125786 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:36:09.844715  125786 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:09.844921  125786 host.go:66] Checking if "ha-503856-m03" exists ...
	I0819 11:36:09.845264  125786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:09.845307  125786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:09.860203  125786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41247
	I0819 11:36:09.860679  125786 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:09.861179  125786 main.go:141] libmachine: Using API Version  1
	I0819 11:36:09.861199  125786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:09.861506  125786 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:09.861688  125786 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:36:09.861917  125786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:09.861937  125786 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:36:09.864866  125786 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:09.865303  125786 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:36:09.865322  125786 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:09.865490  125786 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:36:09.865669  125786 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:36:09.865809  125786 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:36:09.865946  125786 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/id_rsa Username:docker}
	I0819 11:36:09.947182  125786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:36:09.961594  125786 kubeconfig.go:125] found "ha-503856" server: "https://192.168.39.254:8443"
	I0819 11:36:09.961622  125786 api_server.go:166] Checking apiserver status ...
	I0819 11:36:09.961653  125786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:36:09.975307  125786 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1469/cgroup
	W0819 11:36:09.985269  125786 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1469/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:36:09.985325  125786 ssh_runner.go:195] Run: ls
	I0819 11:36:09.989578  125786 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 11:36:09.994009  125786 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 11:36:09.994037  125786 status.go:422] ha-503856-m03 apiserver status = Running (err=<nil>)
	I0819 11:36:09.994045  125786 status.go:257] ha-503856-m03 status: &{Name:ha-503856-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:36:09.994063  125786 status.go:255] checking status of ha-503856-m04 ...
	I0819 11:36:09.994350  125786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:09.994373  125786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:10.009873  125786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43563
	I0819 11:36:10.010398  125786 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:10.010945  125786 main.go:141] libmachine: Using API Version  1
	I0819 11:36:10.010965  125786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:10.011369  125786 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:10.011556  125786 main.go:141] libmachine: (ha-503856-m04) Calling .GetState
	I0819 11:36:10.013147  125786 status.go:330] ha-503856-m04 host status = "Running" (err=<nil>)
	I0819 11:36:10.013165  125786 host.go:66] Checking if "ha-503856-m04" exists ...
	I0819 11:36:10.013445  125786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:10.013468  125786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:10.029026  125786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I0819 11:36:10.029450  125786 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:10.029886  125786 main.go:141] libmachine: Using API Version  1
	I0819 11:36:10.029914  125786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:10.030208  125786 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:10.030380  125786 main.go:141] libmachine: (ha-503856-m04) Calling .GetIP
	I0819 11:36:10.033426  125786 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:10.034093  125786 main.go:141] libmachine: (ha-503856-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d3:72", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:32:57 +0000 UTC Type:0 Mac:52:54:00:56:d3:72 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-503856-m04 Clientid:01:52:54:00:56:d3:72}
	I0819 11:36:10.034123  125786 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:10.034270  125786 host.go:66] Checking if "ha-503856-m04" exists ...
	I0819 11:36:10.034669  125786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:10.034719  125786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:10.049823  125786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38495
	I0819 11:36:10.050283  125786 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:10.050811  125786 main.go:141] libmachine: Using API Version  1
	I0819 11:36:10.050850  125786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:10.051238  125786 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:10.051451  125786 main.go:141] libmachine: (ha-503856-m04) Calling .DriverName
	I0819 11:36:10.051672  125786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:10.051692  125786 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHHostname
	I0819 11:36:10.054337  125786 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:10.054785  125786 main.go:141] libmachine: (ha-503856-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d3:72", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:32:57 +0000 UTC Type:0 Mac:52:54:00:56:d3:72 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-503856-m04 Clientid:01:52:54:00:56:d3:72}
	I0819 11:36:10.054810  125786 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:10.054937  125786 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHPort
	I0819 11:36:10.055100  125786 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHKeyPath
	I0819 11:36:10.055267  125786 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHUsername
	I0819 11:36:10.055405  125786 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m04/id_rsa Username:docker}
	I0819 11:36:10.134805  125786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:36:10.148835  125786 status.go:257] ha-503856-m04 status: &{Name:ha-503856-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr" : exit status 3
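
Note on the failure above: the error recorded in the stderr is for ha-503856-m02, where the SSH dial to 192.168.39.183:22 fails with "no route to host" while the status code tries to run the disk-usage check (sh -c "df -h /var | awk 'NR==2{print $5}'"), so that node is reported as Host:Error with Kubelet and APIServer Nonexistent. The apiserver health itself is probed over the HA VIP, and the sketch below is a minimal, self-contained illustration of that probe as logged by api_server.go. It is not minikube's implementation; the VIP and port (https://192.168.39.254:8443/healthz) are taken from the log, while the insecure TLS client and the 5-second timeout are assumptions added only so the snippet runs against the apiserver's self-signed certificate.

	// probe_healthz.go: minimal sketch of the apiserver health probe seen in the
	// stderr above ("Checking apiserver healthz at https://192.168.39.254:8443/healthz").
	// Not minikube's code; the insecure TLS client and timeout are assumptions
	// made for illustration only.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver serves a self-signed certificate, so skip verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers HTTP 200 with body "ok", matching the log.
		fmt.Printf("%d %s\n", resp.StatusCode, string(body))
	}

A 200 response with body "ok" matches the log lines for ha-503856 and ha-503856-m03, which is why both still report APIServer:Running even though m02 is unreachable: the probe goes through the shared VIP, not through the stopped node.
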
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-503856 -n ha-503856
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-503856 logs -n 25: (1.35376681s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-503856 cp ha-503856-m03:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4008298079/001/cp-test_ha-503856-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-503856 cp ha-503856-m03:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856:/home/docker/cp-test_ha-503856-m03_ha-503856.txt                       |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n ha-503856 sudo cat                                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /home/docker/cp-test_ha-503856-m03_ha-503856.txt                                 |           |         |         |                     |                     |
	| cp      | ha-503856 cp ha-503856-m03:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m02:/home/docker/cp-test_ha-503856-m03_ha-503856-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n ha-503856-m02 sudo cat                                          | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /home/docker/cp-test_ha-503856-m03_ha-503856-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-503856 cp ha-503856-m03:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04:/home/docker/cp-test_ha-503856-m03_ha-503856-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n ha-503856-m04 sudo cat                                          | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /home/docker/cp-test_ha-503856-m03_ha-503856-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-503856 cp testdata/cp-test.txt                                                | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-503856 cp ha-503856-m04:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4008298079/001/cp-test_ha-503856-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-503856 cp ha-503856-m04:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856:/home/docker/cp-test_ha-503856-m04_ha-503856.txt                       |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n ha-503856 sudo cat                                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /home/docker/cp-test_ha-503856-m04_ha-503856.txt                                 |           |         |         |                     |                     |
	| cp      | ha-503856 cp ha-503856-m04:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m02:/home/docker/cp-test_ha-503856-m04_ha-503856-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n ha-503856-m02 sudo cat                                          | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /home/docker/cp-test_ha-503856-m04_ha-503856-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-503856 cp ha-503856-m04:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m03:/home/docker/cp-test_ha-503856-m04_ha-503856-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n ha-503856-m03 sudo cat                                          | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /home/docker/cp-test_ha-503856-m04_ha-503856-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-503856 node stop m02 -v=7                                                     | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 11:29:25
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 11:29:25.023300  121308 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:29:25.023403  121308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:29:25.023407  121308 out.go:358] Setting ErrFile to fd 2...
	I0819 11:29:25.023411  121308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:29:25.023582  121308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 11:29:25.024191  121308 out.go:352] Setting JSON to false
	I0819 11:29:25.025110  121308 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4311,"bootTime":1724062654,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 11:29:25.025180  121308 start.go:139] virtualization: kvm guest
	I0819 11:29:25.027070  121308 out.go:177] * [ha-503856] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 11:29:25.028243  121308 notify.go:220] Checking for updates...
	I0819 11:29:25.028266  121308 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 11:29:25.029648  121308 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:29:25.031060  121308 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 11:29:25.032384  121308 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 11:29:25.033691  121308 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 11:29:25.034902  121308 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:29:25.036183  121308 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:29:25.073335  121308 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 11:29:25.074656  121308 start.go:297] selected driver: kvm2
	I0819 11:29:25.074678  121308 start.go:901] validating driver "kvm2" against <nil>
	I0819 11:29:25.074695  121308 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:29:25.075514  121308 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:29:25.075622  121308 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19476-99410/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 11:29:25.092588  121308 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 11:29:25.092642  121308 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:29:25.092869  121308 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:29:25.092924  121308 cni.go:84] Creating CNI manager for ""
	I0819 11:29:25.092932  121308 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 11:29:25.092940  121308 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 11:29:25.092984  121308 start.go:340] cluster config:
	{Name:ha-503856 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-503856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:29:25.093092  121308 iso.go:125] acquiring lock: {Name:mkf3e460532d8bd3cc1352bbf698537d79a481a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:29:25.094757  121308 out.go:177] * Starting "ha-503856" primary control-plane node in "ha-503856" cluster
	I0819 11:29:25.096077  121308 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 11:29:25.096125  121308 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 11:29:25.096140  121308 cache.go:56] Caching tarball of preloaded images
	I0819 11:29:25.096238  121308 preload.go:172] Found /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 11:29:25.096250  121308 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 11:29:25.096572  121308 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/config.json ...
	I0819 11:29:25.096596  121308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/config.json: {Name:mkb252db29952c96b64f97f7f38d69e55e2baf9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:29:25.096771  121308 start.go:360] acquireMachinesLock for ha-503856: {Name:mk79269fbbd6912302c8df8ddb039d4e2f1a0790 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:29:25.096807  121308 start.go:364] duration metric: took 20.687µs to acquireMachinesLock for "ha-503856"
	I0819 11:29:25.096831  121308 start.go:93] Provisioning new machine with config: &{Name:ha-503856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-503856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 11:29:25.096907  121308 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 11:29:25.098381  121308 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:29:25.098537  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:29:25.098582  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:29:25.116025  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33505
	I0819 11:29:25.116529  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:29:25.117139  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:29:25.117161  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:29:25.117560  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:29:25.117750  121308 main.go:141] libmachine: (ha-503856) Calling .GetMachineName
	I0819 11:29:25.117875  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:29:25.118004  121308 start.go:159] libmachine.API.Create for "ha-503856" (driver="kvm2")
	I0819 11:29:25.118032  121308 client.go:168] LocalClient.Create starting
	I0819 11:29:25.118060  121308 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem
	I0819 11:29:25.118090  121308 main.go:141] libmachine: Decoding PEM data...
	I0819 11:29:25.118104  121308 main.go:141] libmachine: Parsing certificate...
	I0819 11:29:25.118160  121308 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem
	I0819 11:29:25.118180  121308 main.go:141] libmachine: Decoding PEM data...
	I0819 11:29:25.118194  121308 main.go:141] libmachine: Parsing certificate...
	I0819 11:29:25.118209  121308 main.go:141] libmachine: Running pre-create checks...
	I0819 11:29:25.118216  121308 main.go:141] libmachine: (ha-503856) Calling .PreCreateCheck
	I0819 11:29:25.118509  121308 main.go:141] libmachine: (ha-503856) Calling .GetConfigRaw
	I0819 11:29:25.118863  121308 main.go:141] libmachine: Creating machine...
	I0819 11:29:25.118876  121308 main.go:141] libmachine: (ha-503856) Calling .Create
	I0819 11:29:25.119005  121308 main.go:141] libmachine: (ha-503856) Creating KVM machine...
	I0819 11:29:25.120328  121308 main.go:141] libmachine: (ha-503856) DBG | found existing default KVM network
	I0819 11:29:25.121251  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:25.121110  121331 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c00}
	I0819 11:29:25.121321  121308 main.go:141] libmachine: (ha-503856) DBG | created network xml: 
	I0819 11:29:25.121347  121308 main.go:141] libmachine: (ha-503856) DBG | <network>
	I0819 11:29:25.121363  121308 main.go:141] libmachine: (ha-503856) DBG |   <name>mk-ha-503856</name>
	I0819 11:29:25.121379  121308 main.go:141] libmachine: (ha-503856) DBG |   <dns enable='no'/>
	I0819 11:29:25.121395  121308 main.go:141] libmachine: (ha-503856) DBG |   
	I0819 11:29:25.121408  121308 main.go:141] libmachine: (ha-503856) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0819 11:29:25.121418  121308 main.go:141] libmachine: (ha-503856) DBG |     <dhcp>
	I0819 11:29:25.121426  121308 main.go:141] libmachine: (ha-503856) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0819 11:29:25.121434  121308 main.go:141] libmachine: (ha-503856) DBG |     </dhcp>
	I0819 11:29:25.121439  121308 main.go:141] libmachine: (ha-503856) DBG |   </ip>
	I0819 11:29:25.121444  121308 main.go:141] libmachine: (ha-503856) DBG |   
	I0819 11:29:25.121452  121308 main.go:141] libmachine: (ha-503856) DBG | </network>
	I0819 11:29:25.121476  121308 main.go:141] libmachine: (ha-503856) DBG | 
	I0819 11:29:25.126606  121308 main.go:141] libmachine: (ha-503856) DBG | trying to create private KVM network mk-ha-503856 192.168.39.0/24...
	I0819 11:29:25.194625  121308 main.go:141] libmachine: (ha-503856) DBG | private KVM network mk-ha-503856 192.168.39.0/24 created
	I0819 11:29:25.194705  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:25.194577  121331 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 11:29:25.194752  121308 main.go:141] libmachine: (ha-503856) Setting up store path in /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856 ...
	I0819 11:29:25.194774  121308 main.go:141] libmachine: (ha-503856) Building disk image from file:///home/jenkins/minikube-integration/19476-99410/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 11:29:25.194792  121308 main.go:141] libmachine: (ha-503856) Downloading /home/jenkins/minikube-integration/19476-99410/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19476-99410/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 11:29:25.459397  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:25.459252  121331 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa...
	I0819 11:29:25.646269  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:25.646148  121331 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/ha-503856.rawdisk...
	I0819 11:29:25.646294  121308 main.go:141] libmachine: (ha-503856) DBG | Writing magic tar header
	I0819 11:29:25.646305  121308 main.go:141] libmachine: (ha-503856) DBG | Writing SSH key tar header
	I0819 11:29:25.646313  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:25.646267  121331 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856 ...
	I0819 11:29:25.646390  121308 main.go:141] libmachine: (ha-503856) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856
	I0819 11:29:25.646415  121308 main.go:141] libmachine: (ha-503856) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856 (perms=drwx------)
	I0819 11:29:25.646427  121308 main.go:141] libmachine: (ha-503856) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube/machines (perms=drwxr-xr-x)
	I0819 11:29:25.646438  121308 main.go:141] libmachine: (ha-503856) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube/machines
	I0819 11:29:25.646480  121308 main.go:141] libmachine: (ha-503856) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube (perms=drwxr-xr-x)
	I0819 11:29:25.646512  121308 main.go:141] libmachine: (ha-503856) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410 (perms=drwxrwxr-x)
	I0819 11:29:25.646526  121308 main.go:141] libmachine: (ha-503856) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 11:29:25.646538  121308 main.go:141] libmachine: (ha-503856) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 11:29:25.646562  121308 main.go:141] libmachine: (ha-503856) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410
	I0819 11:29:25.646582  121308 main.go:141] libmachine: (ha-503856) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 11:29:25.646589  121308 main.go:141] libmachine: (ha-503856) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 11:29:25.646602  121308 main.go:141] libmachine: (ha-503856) DBG | Checking permissions on dir: /home/jenkins
	I0819 11:29:25.646619  121308 main.go:141] libmachine: (ha-503856) DBG | Checking permissions on dir: /home
	I0819 11:29:25.646627  121308 main.go:141] libmachine: (ha-503856) Creating domain...
	I0819 11:29:25.646642  121308 main.go:141] libmachine: (ha-503856) DBG | Skipping /home - not owner
	I0819 11:29:25.647557  121308 main.go:141] libmachine: (ha-503856) define libvirt domain using xml: 
	I0819 11:29:25.647582  121308 main.go:141] libmachine: (ha-503856) <domain type='kvm'>
	I0819 11:29:25.647593  121308 main.go:141] libmachine: (ha-503856)   <name>ha-503856</name>
	I0819 11:29:25.647601  121308 main.go:141] libmachine: (ha-503856)   <memory unit='MiB'>2200</memory>
	I0819 11:29:25.647609  121308 main.go:141] libmachine: (ha-503856)   <vcpu>2</vcpu>
	I0819 11:29:25.647616  121308 main.go:141] libmachine: (ha-503856)   <features>
	I0819 11:29:25.647624  121308 main.go:141] libmachine: (ha-503856)     <acpi/>
	I0819 11:29:25.647631  121308 main.go:141] libmachine: (ha-503856)     <apic/>
	I0819 11:29:25.647639  121308 main.go:141] libmachine: (ha-503856)     <pae/>
	I0819 11:29:25.647650  121308 main.go:141] libmachine: (ha-503856)     
	I0819 11:29:25.647665  121308 main.go:141] libmachine: (ha-503856)   </features>
	I0819 11:29:25.647681  121308 main.go:141] libmachine: (ha-503856)   <cpu mode='host-passthrough'>
	I0819 11:29:25.647703  121308 main.go:141] libmachine: (ha-503856)   
	I0819 11:29:25.647732  121308 main.go:141] libmachine: (ha-503856)   </cpu>
	I0819 11:29:25.647742  121308 main.go:141] libmachine: (ha-503856)   <os>
	I0819 11:29:25.647753  121308 main.go:141] libmachine: (ha-503856)     <type>hvm</type>
	I0819 11:29:25.647763  121308 main.go:141] libmachine: (ha-503856)     <boot dev='cdrom'/>
	I0819 11:29:25.647775  121308 main.go:141] libmachine: (ha-503856)     <boot dev='hd'/>
	I0819 11:29:25.647798  121308 main.go:141] libmachine: (ha-503856)     <bootmenu enable='no'/>
	I0819 11:29:25.647813  121308 main.go:141] libmachine: (ha-503856)   </os>
	I0819 11:29:25.647827  121308 main.go:141] libmachine: (ha-503856)   <devices>
	I0819 11:29:25.647844  121308 main.go:141] libmachine: (ha-503856)     <disk type='file' device='cdrom'>
	I0819 11:29:25.647863  121308 main.go:141] libmachine: (ha-503856)       <source file='/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/boot2docker.iso'/>
	I0819 11:29:25.647874  121308 main.go:141] libmachine: (ha-503856)       <target dev='hdc' bus='scsi'/>
	I0819 11:29:25.647886  121308 main.go:141] libmachine: (ha-503856)       <readonly/>
	I0819 11:29:25.647896  121308 main.go:141] libmachine: (ha-503856)     </disk>
	I0819 11:29:25.647910  121308 main.go:141] libmachine: (ha-503856)     <disk type='file' device='disk'>
	I0819 11:29:25.647929  121308 main.go:141] libmachine: (ha-503856)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 11:29:25.647945  121308 main.go:141] libmachine: (ha-503856)       <source file='/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/ha-503856.rawdisk'/>
	I0819 11:29:25.647956  121308 main.go:141] libmachine: (ha-503856)       <target dev='hda' bus='virtio'/>
	I0819 11:29:25.647963  121308 main.go:141] libmachine: (ha-503856)     </disk>
	I0819 11:29:25.647974  121308 main.go:141] libmachine: (ha-503856)     <interface type='network'>
	I0819 11:29:25.647986  121308 main.go:141] libmachine: (ha-503856)       <source network='mk-ha-503856'/>
	I0819 11:29:25.648000  121308 main.go:141] libmachine: (ha-503856)       <model type='virtio'/>
	I0819 11:29:25.648012  121308 main.go:141] libmachine: (ha-503856)     </interface>
	I0819 11:29:25.648023  121308 main.go:141] libmachine: (ha-503856)     <interface type='network'>
	I0819 11:29:25.648035  121308 main.go:141] libmachine: (ha-503856)       <source network='default'/>
	I0819 11:29:25.648046  121308 main.go:141] libmachine: (ha-503856)       <model type='virtio'/>
	I0819 11:29:25.648054  121308 main.go:141] libmachine: (ha-503856)     </interface>
	I0819 11:29:25.648065  121308 main.go:141] libmachine: (ha-503856)     <serial type='pty'>
	I0819 11:29:25.648074  121308 main.go:141] libmachine: (ha-503856)       <target port='0'/>
	I0819 11:29:25.648086  121308 main.go:141] libmachine: (ha-503856)     </serial>
	I0819 11:29:25.648096  121308 main.go:141] libmachine: (ha-503856)     <console type='pty'>
	I0819 11:29:25.648105  121308 main.go:141] libmachine: (ha-503856)       <target type='serial' port='0'/>
	I0819 11:29:25.648125  121308 main.go:141] libmachine: (ha-503856)     </console>
	I0819 11:29:25.648136  121308 main.go:141] libmachine: (ha-503856)     <rng model='virtio'>
	I0819 11:29:25.648153  121308 main.go:141] libmachine: (ha-503856)       <backend model='random'>/dev/random</backend>
	I0819 11:29:25.648164  121308 main.go:141] libmachine: (ha-503856)     </rng>
	I0819 11:29:25.648173  121308 main.go:141] libmachine: (ha-503856)     
	I0819 11:29:25.648180  121308 main.go:141] libmachine: (ha-503856)     
	I0819 11:29:25.648189  121308 main.go:141] libmachine: (ha-503856)   </devices>
	I0819 11:29:25.648200  121308 main.go:141] libmachine: (ha-503856) </domain>
	I0819 11:29:25.648210  121308 main.go:141] libmachine: (ha-503856) 
	I0819 11:29:25.652462  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:72:77:58 in network default
	I0819 11:29:25.653112  121308 main.go:141] libmachine: (ha-503856) Ensuring networks are active...
	I0819 11:29:25.653131  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:25.653799  121308 main.go:141] libmachine: (ha-503856) Ensuring network default is active
	I0819 11:29:25.654102  121308 main.go:141] libmachine: (ha-503856) Ensuring network mk-ha-503856 is active
	I0819 11:29:25.654521  121308 main.go:141] libmachine: (ha-503856) Getting domain xml...
	I0819 11:29:25.655166  121308 main.go:141] libmachine: (ha-503856) Creating domain...
	I0819 11:29:26.869502  121308 main.go:141] libmachine: (ha-503856) Waiting to get IP...
	I0819 11:29:26.870259  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:26.870614  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:26.870637  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:26.870590  121331 retry.go:31] will retry after 296.406567ms: waiting for machine to come up
	I0819 11:29:27.169294  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:27.169757  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:27.169782  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:27.169723  121331 retry.go:31] will retry after 276.081331ms: waiting for machine to come up
	I0819 11:29:27.447191  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:27.447642  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:27.447667  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:27.447604  121331 retry.go:31] will retry after 385.241682ms: waiting for machine to come up
	I0819 11:29:27.834217  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:27.834627  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:27.834658  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:27.834604  121331 retry.go:31] will retry after 586.232406ms: waiting for machine to come up
	I0819 11:29:28.422499  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:28.422826  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:28.422875  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:28.422799  121331 retry.go:31] will retry after 517.887819ms: waiting for machine to come up
	I0819 11:29:28.942704  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:28.943161  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:28.943192  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:28.943117  121331 retry.go:31] will retry after 638.927317ms: waiting for machine to come up
	I0819 11:29:29.584039  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:29.584404  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:29.584448  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:29.584361  121331 retry.go:31] will retry after 1.031172042s: waiting for machine to come up
	I0819 11:29:30.617196  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:30.617579  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:30.617604  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:30.617527  121331 retry.go:31] will retry after 1.482642322s: waiting for machine to come up
	I0819 11:29:32.102169  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:32.102589  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:32.102617  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:32.102540  121331 retry.go:31] will retry after 1.291948881s: waiting for machine to come up
	I0819 11:29:33.396112  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:33.396572  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:33.396603  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:33.396515  121331 retry.go:31] will retry after 1.881043413s: waiting for machine to come up
	I0819 11:29:35.279181  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:35.279630  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:35.279663  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:35.279612  121331 retry.go:31] will retry after 1.897450306s: waiting for machine to come up
	I0819 11:29:37.179767  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:37.180214  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:37.180241  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:37.180195  121331 retry.go:31] will retry after 3.322751014s: waiting for machine to come up
	I0819 11:29:40.504395  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:40.504881  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:40.504900  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:40.504827  121331 retry.go:31] will retry after 3.885433697s: waiting for machine to come up
	I0819 11:29:44.395167  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:44.395631  121308 main.go:141] libmachine: (ha-503856) Found IP for machine: 192.168.39.102
	I0819 11:29:44.395647  121308 main.go:141] libmachine: (ha-503856) Reserving static IP address...
	I0819 11:29:44.395662  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has current primary IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:44.396011  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find host DHCP lease matching {name: "ha-503856", mac: "52:54:00:d1:ab:80", ip: "192.168.39.102"} in network mk-ha-503856
	I0819 11:29:44.475072  121308 main.go:141] libmachine: (ha-503856) DBG | Getting to WaitForSSH function...
	I0819 11:29:44.475106  121308 main.go:141] libmachine: (ha-503856) Reserved static IP address: 192.168.39.102
	I0819 11:29:44.475121  121308 main.go:141] libmachine: (ha-503856) Waiting for SSH to be available...
	I0819 11:29:44.477916  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:44.478299  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:44.478335  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:44.478478  121308 main.go:141] libmachine: (ha-503856) DBG | Using SSH client type: external
	I0819 11:29:44.478511  121308 main.go:141] libmachine: (ha-503856) DBG | Using SSH private key: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa (-rw-------)
	I0819 11:29:44.478544  121308 main.go:141] libmachine: (ha-503856) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 11:29:44.478559  121308 main.go:141] libmachine: (ha-503856) DBG | About to run SSH command:
	I0819 11:29:44.478572  121308 main.go:141] libmachine: (ha-503856) DBG | exit 0
	I0819 11:29:44.603965  121308 main.go:141] libmachine: (ha-503856) DBG | SSH cmd err, output: <nil>: 
	I0819 11:29:44.604267  121308 main.go:141] libmachine: (ha-503856) KVM machine creation complete!
	I0819 11:29:44.604611  121308 main.go:141] libmachine: (ha-503856) Calling .GetConfigRaw
	I0819 11:29:44.605234  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:29:44.605435  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:29:44.605607  121308 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 11:29:44.605622  121308 main.go:141] libmachine: (ha-503856) Calling .GetState
	I0819 11:29:44.606987  121308 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 11:29:44.607004  121308 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 11:29:44.607012  121308 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 11:29:44.607021  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:29:44.609226  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:44.609590  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:44.609627  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:44.609777  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:29:44.610001  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:44.610222  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:44.610353  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:29:44.610511  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:29:44.610722  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0819 11:29:44.610736  121308 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 11:29:44.715112  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 11:29:44.715132  121308 main.go:141] libmachine: Detecting the provisioner...
	I0819 11:29:44.715140  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:29:44.717839  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:44.718198  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:44.718226  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:44.718384  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:29:44.718595  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:44.718748  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:44.718874  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:29:44.719026  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:29:44.719188  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0819 11:29:44.719199  121308 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 11:29:44.824171  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 11:29:44.824277  121308 main.go:141] libmachine: found compatible host: buildroot
	I0819 11:29:44.824292  121308 main.go:141] libmachine: Provisioning with buildroot...
	I0819 11:29:44.824304  121308 main.go:141] libmachine: (ha-503856) Calling .GetMachineName
	I0819 11:29:44.824628  121308 buildroot.go:166] provisioning hostname "ha-503856"
	I0819 11:29:44.824654  121308 main.go:141] libmachine: (ha-503856) Calling .GetMachineName
	I0819 11:29:44.824823  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:29:44.827275  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:44.827565  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:44.827590  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:44.827716  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:29:44.827928  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:44.828082  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:44.828197  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:29:44.828342  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:29:44.828548  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0819 11:29:44.828561  121308 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-503856 && echo "ha-503856" | sudo tee /etc/hostname
	I0819 11:29:44.945015  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-503856
	
	I0819 11:29:44.945054  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:29:44.947703  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:44.948087  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:44.948121  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:44.948304  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:29:44.948543  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:44.948726  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:44.948881  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:29:44.949022  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:29:44.949178  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0819 11:29:44.949192  121308 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-503856' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-503856/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-503856' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 11:29:45.063956  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 11:29:45.063986  121308 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19476-99410/.minikube CaCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19476-99410/.minikube}
	I0819 11:29:45.064033  121308 buildroot.go:174] setting up certificates
	I0819 11:29:45.064047  121308 provision.go:84] configureAuth start
	I0819 11:29:45.064061  121308 main.go:141] libmachine: (ha-503856) Calling .GetMachineName
	I0819 11:29:45.064388  121308 main.go:141] libmachine: (ha-503856) Calling .GetIP
	I0819 11:29:45.066803  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.067097  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:45.067128  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.067229  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:29:45.069505  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.069809  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:45.069836  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.069965  121308 provision.go:143] copyHostCerts
	I0819 11:29:45.069998  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 11:29:45.070042  121308 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem, removing ...
	I0819 11:29:45.070060  121308 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 11:29:45.070127  121308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem (1082 bytes)
	I0819 11:29:45.070217  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 11:29:45.070238  121308 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem, removing ...
	I0819 11:29:45.070245  121308 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 11:29:45.070268  121308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem (1123 bytes)
	I0819 11:29:45.070336  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 11:29:45.070360  121308 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem, removing ...
	I0819 11:29:45.070368  121308 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 11:29:45.070401  121308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem (1679 bytes)
	I0819 11:29:45.070469  121308 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem org=jenkins.ha-503856 san=[127.0.0.1 192.168.39.102 ha-503856 localhost minikube]
	I0819 11:29:45.164209  121308 provision.go:177] copyRemoteCerts
	I0819 11:29:45.164278  121308 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 11:29:45.164310  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:29:45.166851  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.167327  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:45.167361  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.167489  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:29:45.167715  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:45.167905  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:29:45.168078  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:29:45.249878  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 11:29:45.249969  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0819 11:29:45.274017  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 11:29:45.274087  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 11:29:45.297588  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 11:29:45.297659  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 11:29:45.321478  121308 provision.go:87] duration metric: took 257.404108ms to configureAuth
	I0819 11:29:45.321508  121308 buildroot.go:189] setting minikube options for container-runtime
	I0819 11:29:45.321681  121308 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:29:45.321760  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:29:45.324425  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.324811  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:45.324853  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.325040  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:29:45.325250  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:45.325400  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:45.325526  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:29:45.325666  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:29:45.325846  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0819 11:29:45.325871  121308 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 11:29:45.588104  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 11:29:45.588138  121308 main.go:141] libmachine: Checking connection to Docker...
	I0819 11:29:45.588149  121308 main.go:141] libmachine: (ha-503856) Calling .GetURL
	I0819 11:29:45.589426  121308 main.go:141] libmachine: (ha-503856) DBG | Using libvirt version 6000000
	I0819 11:29:45.591760  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.592252  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:45.592274  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.592501  121308 main.go:141] libmachine: Docker is up and running!
	I0819 11:29:45.592522  121308 main.go:141] libmachine: Reticulating splines...
	I0819 11:29:45.592529  121308 client.go:171] duration metric: took 20.474488342s to LocalClient.Create
	I0819 11:29:45.592552  121308 start.go:167] duration metric: took 20.474549128s to libmachine.API.Create "ha-503856"
	I0819 11:29:45.592563  121308 start.go:293] postStartSetup for "ha-503856" (driver="kvm2")
	I0819 11:29:45.592574  121308 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 11:29:45.592590  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:29:45.592822  121308 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 11:29:45.592847  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:29:45.594970  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.595304  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:45.595330  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.595508  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:29:45.595704  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:45.595878  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:29:45.596035  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:29:45.677728  121308 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 11:29:45.681821  121308 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 11:29:45.681848  121308 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/addons for local assets ...
	I0819 11:29:45.681914  121308 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/files for local assets ...
	I0819 11:29:45.681986  121308 filesync.go:149] local asset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> 1066322.pem in /etc/ssl/certs
	I0819 11:29:45.681996  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> /etc/ssl/certs/1066322.pem
	I0819 11:29:45.682085  121308 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 11:29:45.691177  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 11:29:45.714524  121308 start.go:296] duration metric: took 121.945037ms for postStartSetup
	I0819 11:29:45.714586  121308 main.go:141] libmachine: (ha-503856) Calling .GetConfigRaw
	I0819 11:29:45.715202  121308 main.go:141] libmachine: (ha-503856) Calling .GetIP
	I0819 11:29:45.717648  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.717977  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:45.718016  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.718245  121308 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/config.json ...
	I0819 11:29:45.718453  121308 start.go:128] duration metric: took 20.621534419s to createHost
	I0819 11:29:45.718475  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:29:45.720739  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.721090  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:45.721117  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.721288  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:29:45.721487  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:45.721658  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:45.721812  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:29:45.721962  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:29:45.722164  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0819 11:29:45.722176  121308 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 11:29:45.828348  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724066985.800962210
	
	I0819 11:29:45.828376  121308 fix.go:216] guest clock: 1724066985.800962210
	I0819 11:29:45.828387  121308 fix.go:229] Guest: 2024-08-19 11:29:45.80096221 +0000 UTC Remote: 2024-08-19 11:29:45.718464633 +0000 UTC m=+20.731826657 (delta=82.497577ms)
	I0819 11:29:45.828409  121308 fix.go:200] guest clock delta is within tolerance: 82.497577ms
	I0819 11:29:45.828414  121308 start.go:83] releasing machines lock for "ha-503856", held for 20.731595853s
	I0819 11:29:45.828432  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:29:45.828742  121308 main.go:141] libmachine: (ha-503856) Calling .GetIP
	I0819 11:29:45.831183  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.831496  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:45.831533  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.831648  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:29:45.832259  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:29:45.832455  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:29:45.832554  121308 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 11:29:45.832609  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:29:45.832661  121308 ssh_runner.go:195] Run: cat /version.json
	I0819 11:29:45.832687  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:29:45.835004  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.835076  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.835421  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:45.835454  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:45.835475  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.835490  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.835628  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:29:45.835663  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:29:45.835828  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:45.835836  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:45.835979  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:29:45.835987  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:29:45.836117  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:29:45.836116  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:29:45.912640  121308 ssh_runner.go:195] Run: systemctl --version
	I0819 11:29:45.933374  121308 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 11:29:46.087190  121308 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 11:29:46.093838  121308 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 11:29:46.093904  121308 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 11:29:46.109026  121308 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 11:29:46.109055  121308 start.go:495] detecting cgroup driver to use...
	I0819 11:29:46.109129  121308 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 11:29:46.124862  121308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 11:29:46.138847  121308 docker.go:217] disabling cri-docker service (if available) ...
	I0819 11:29:46.138912  121308 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 11:29:46.153299  121308 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 11:29:46.167932  121308 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 11:29:46.288292  121308 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 11:29:46.458556  121308 docker.go:233] disabling docker service ...
	I0819 11:29:46.458652  121308 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 11:29:46.473035  121308 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 11:29:46.486416  121308 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 11:29:46.614865  121308 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 11:29:46.748884  121308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 11:29:46.762268  121308 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 11:29:46.780298  121308 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 11:29:46.780378  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:29:46.790974  121308 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 11:29:46.791039  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:29:46.801482  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:29:46.811862  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:29:46.822358  121308 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 11:29:46.832997  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:29:46.843401  121308 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:29:46.860306  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:29:46.870896  121308 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 11:29:46.880199  121308 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 11:29:46.880269  121308 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 11:29:46.893533  121308 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 11:29:46.903338  121308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:29:47.030300  121308 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 11:29:47.163347  121308 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 11:29:47.163438  121308 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 11:29:47.168142  121308 start.go:563] Will wait 60s for crictl version
	I0819 11:29:47.168210  121308 ssh_runner.go:195] Run: which crictl
	I0819 11:29:47.171837  121308 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 11:29:47.210346  121308 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 11:29:47.210433  121308 ssh_runner.go:195] Run: crio --version
	I0819 11:29:47.238323  121308 ssh_runner.go:195] Run: crio --version
	I0819 11:29:47.267905  121308 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 11:29:47.269300  121308 main.go:141] libmachine: (ha-503856) Calling .GetIP
	I0819 11:29:47.272144  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:47.272560  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:47.272587  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:47.272809  121308 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 11:29:47.276897  121308 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 11:29:47.289872  121308 kubeadm.go:883] updating cluster {Name:ha-503856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 11:29:47.289997  121308 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 11:29:47.290053  121308 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 11:29:47.321530  121308 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 11:29:47.321602  121308 ssh_runner.go:195] Run: which lz4
	I0819 11:29:47.325560  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0819 11:29:47.325677  121308 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 11:29:47.329750  121308 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 11:29:47.329793  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 11:29:48.565735  121308 crio.go:462] duration metric: took 1.240087569s to copy over tarball
	I0819 11:29:48.565816  121308 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 11:29:50.656031  121308 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.090179988s)
	I0819 11:29:50.656067  121308 crio.go:469] duration metric: took 2.09030002s to extract the tarball
	I0819 11:29:50.656077  121308 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 11:29:50.694696  121308 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 11:29:50.735948  121308 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 11:29:50.735975  121308 cache_images.go:84] Images are preloaded, skipping loading
	I0819 11:29:50.735983  121308 kubeadm.go:934] updating node { 192.168.39.102 8443 v1.31.0 crio true true} ...
	I0819 11:29:50.736128  121308 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-503856 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 11:29:50.736196  121308 ssh_runner.go:195] Run: crio config
	I0819 11:29:50.785870  121308 cni.go:84] Creating CNI manager for ""
	I0819 11:29:50.785890  121308 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 11:29:50.785898  121308 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 11:29:50.785919  121308 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-503856 NodeName:ha-503856 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 11:29:50.786046  121308 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-503856"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 11:29:50.786071  121308 kube-vip.go:115] generating kube-vip config ...
	I0819 11:29:50.786115  121308 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 11:29:50.803283  121308 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 11:29:50.803405  121308 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0819 11:29:50.803466  121308 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 11:29:50.813282  121308 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 11:29:50.813350  121308 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 11:29:50.822899  121308 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0819 11:29:50.839252  121308 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 11:29:50.855440  121308 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0819 11:29:50.871819  121308 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0819 11:29:50.887822  121308 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 11:29:50.891655  121308 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 11:29:50.903950  121308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:29:51.035237  121308 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:29:51.051783  121308 certs.go:68] Setting up /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856 for IP: 192.168.39.102
	I0819 11:29:51.051809  121308 certs.go:194] generating shared ca certs ...
	I0819 11:29:51.051825  121308 certs.go:226] acquiring lock for ca certs: {Name:mkea0a5571fe2f86ea35780a8a0585cdcb5f186c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:29:51.051999  121308 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key
	I0819 11:29:51.052058  121308 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key
	I0819 11:29:51.052071  121308 certs.go:256] generating profile certs ...
	I0819 11:29:51.052162  121308 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.key
	I0819 11:29:51.052194  121308 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.crt with IP's: []
	I0819 11:29:51.270504  121308 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.crt ...
	I0819 11:29:51.270539  121308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.crt: {Name:mk9a88274d45fc56fb7a425e3de1e21485ead09f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:29:51.270741  121308 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.key ...
	I0819 11:29:51.270755  121308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.key: {Name:mk60b21abe048b27494c96025e666ab2288eae45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:29:51.270860  121308 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.333fb727
	I0819 11:29:51.270876  121308 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.333fb727 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.254]
	I0819 11:29:51.494646  121308 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.333fb727 ...
	I0819 11:29:51.494678  121308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.333fb727: {Name:mk41b07f16ec35f77ef14672e9516d40d7f2b12c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:29:51.494863  121308 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.333fb727 ...
	I0819 11:29:51.494879  121308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.333fb727: {Name:mk81aa2b88e45024ea1afdd52c3744c0cc1a2bc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:29:51.494973  121308 certs.go:381] copying /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.333fb727 -> /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt
	I0819 11:29:51.495051  121308 certs.go:385] copying /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.333fb727 -> /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key
	I0819 11:29:51.495106  121308 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key
	I0819 11:29:51.495120  121308 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.crt with IP's: []
	I0819 11:29:51.636785  121308 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.crt ...
	I0819 11:29:51.636814  121308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.crt: {Name:mk26e8fa9747f87d776243cca11643b6f4dc6224 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:29:51.636995  121308 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key ...
	I0819 11:29:51.637008  121308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key: {Name:mkc1e8c4b0167d5c4219c6cd16298094535b3d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:29:51.637102  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 11:29:51.637122  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 11:29:51.637133  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 11:29:51.637146  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 11:29:51.637158  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 11:29:51.637170  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 11:29:51.637181  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 11:29:51.637192  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 11:29:51.637253  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem (1338 bytes)
	W0819 11:29:51.637290  121308 certs.go:480] ignoring /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632_empty.pem, impossibly tiny 0 bytes
	I0819 11:29:51.637299  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 11:29:51.637319  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem (1082 bytes)
	I0819 11:29:51.637390  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem (1123 bytes)
	I0819 11:29:51.637417  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem (1679 bytes)
	I0819 11:29:51.637467  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 11:29:51.637496  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:29:51.637511  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem -> /usr/share/ca-certificates/106632.pem
	I0819 11:29:51.637523  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> /usr/share/ca-certificates/1066322.pem
	I0819 11:29:51.638063  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 11:29:51.663063  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 11:29:51.686885  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 11:29:51.711523  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 11:29:51.735720  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 11:29:51.763344  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 11:29:51.789570  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 11:29:51.837925  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 11:29:51.862448  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 11:29:51.885558  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem --> /usr/share/ca-certificates/106632.pem (1338 bytes)
	I0819 11:29:51.908522  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /usr/share/ca-certificates/1066322.pem (1708 bytes)
	I0819 11:29:51.931909  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 11:29:51.948273  121308 ssh_runner.go:195] Run: openssl version
	I0819 11:29:51.953992  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1066322.pem && ln -fs /usr/share/ca-certificates/1066322.pem /etc/ssl/certs/1066322.pem"
	I0819 11:29:51.965327  121308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1066322.pem
	I0819 11:29:51.969714  121308 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 11:26 /usr/share/ca-certificates/1066322.pem
	I0819 11:29:51.969791  121308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1066322.pem
	I0819 11:29:51.975410  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1066322.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 11:29:51.986334  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 11:29:51.997623  121308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:29:52.002081  121308 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:29:52.002168  121308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:29:52.007737  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 11:29:52.018637  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106632.pem && ln -fs /usr/share/ca-certificates/106632.pem /etc/ssl/certs/106632.pem"
	I0819 11:29:52.029514  121308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106632.pem
	I0819 11:29:52.033753  121308 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 11:26 /usr/share/ca-certificates/106632.pem
	I0819 11:29:52.033837  121308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106632.pem
	I0819 11:29:52.039589  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106632.pem /etc/ssl/certs/51391683.0"
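The block above is how the guest's system trust store gets populated: each CA bundle copied to /usr/share/ca-certificates is hashed with `openssl x509 -hash` and symlinked into /etc/ssl/certs as `<hash>.0`, so CRI-O, kubelet, and anything using the system trust roots accepts the minikube CA and the user's extra certs. A minimal Go sketch of the same idea follows; the helper name is hypothetical and this is not minikube's actual certs.go code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert computes the OpenSSL subject hash of a PEM certificate and
// symlinks it into /etc/ssl/certs as <hash>.0, which is what the
// "openssl x509 -hash" / "ln -fs" pair in the log accomplishes.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // behave like ln -f: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	// Hypothetical usage; needs root, exactly like the sudo'd commands above.
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}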
	I0819 11:29:52.050782  121308 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 11:29:52.054807  121308 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 11:29:52.054875  121308 kubeadm.go:392] StartCluster: {Name:ha-503856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:29:52.054967  121308 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 11:29:52.055026  121308 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 11:29:52.090316  121308 cri.go:89] found id: ""
	I0819 11:29:52.090394  121308 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 11:29:52.100701  121308 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 11:29:52.111233  121308 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 11:29:52.122410  121308 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 11:29:52.122432  121308 kubeadm.go:157] found existing configuration files:
	
	I0819 11:29:52.122490  121308 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 11:29:52.131490  121308 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 11:29:52.131553  121308 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 11:29:52.141136  121308 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 11:29:52.150138  121308 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 11:29:52.150203  121308 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 11:29:52.160566  121308 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 11:29:52.170302  121308 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 11:29:52.170370  121308 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 11:29:52.180199  121308 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 11:29:52.190011  121308 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 11:29:52.190088  121308 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
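Before `kubeadm init` runs, each static kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if it does not reference it, so stale configuration from a previous run cannot leak into the new cluster (here all four files are simply absent, the normal first-start case). A rough Go sketch of that cleanup; the function name and structure are assumptions, not minikube's code.

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanupStaleConfigs keeps a kubeconfig only if it already points at the
// expected control-plane endpoint; anything else is removed before kubeadm init.
func cleanupStaleConfigs(endpoint string) {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // file missing: the common first-start case seen in the log
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("removing stale %s (no reference to %s)\n", f, endpoint)
			_ = os.Remove(f)
		}
	}
}

func main() {
	cleanupStaleConfigs("https://control-plane.minikube.internal:8443")
}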
	I0819 11:29:52.199926  121308 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 11:29:52.300321  121308 kubeadm.go:310] W0819 11:29:52.281176     848 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 11:29:52.301119  121308 kubeadm.go:310] W0819 11:29:52.282038     848 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 11:29:52.399857  121308 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 11:30:06.546603  121308 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 11:30:06.546686  121308 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 11:30:06.546781  121308 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 11:30:06.546931  121308 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 11:30:06.547047  121308 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 11:30:06.547144  121308 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 11:30:06.548546  121308 out.go:235]   - Generating certificates and keys ...
	I0819 11:30:06.548623  121308 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 11:30:06.548674  121308 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 11:30:06.548781  121308 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 11:30:06.548866  121308 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 11:30:06.548919  121308 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 11:30:06.548962  121308 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 11:30:06.549014  121308 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 11:30:06.549132  121308 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-503856 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I0819 11:30:06.549204  121308 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 11:30:06.549363  121308 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-503856 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I0819 11:30:06.549486  121308 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 11:30:06.549562  121308 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 11:30:06.549609  121308 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 11:30:06.549662  121308 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 11:30:06.549717  121308 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 11:30:06.549769  121308 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 11:30:06.549847  121308 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 11:30:06.549905  121308 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 11:30:06.549951  121308 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 11:30:06.550025  121308 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 11:30:06.550089  121308 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 11:30:06.552126  121308 out.go:235]   - Booting up control plane ...
	I0819 11:30:06.552206  121308 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 11:30:06.552270  121308 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 11:30:06.552331  121308 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 11:30:06.552451  121308 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 11:30:06.552556  121308 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 11:30:06.552624  121308 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 11:30:06.552760  121308 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 11:30:06.552897  121308 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 11:30:06.552956  121308 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.486946ms
	I0819 11:30:06.553055  121308 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 11:30:06.553133  121308 kubeadm.go:310] [api-check] The API server is healthy after 8.93290721s
	I0819 11:30:06.553260  121308 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 11:30:06.553380  121308 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 11:30:06.553455  121308 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 11:30:06.553685  121308 kubeadm.go:310] [mark-control-plane] Marking the node ha-503856 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 11:30:06.553771  121308 kubeadm.go:310] [bootstrap-token] Using token: yabek6.lq4ketpzskifobiz
	I0819 11:30:06.554982  121308 out.go:235]   - Configuring RBAC rules ...
	I0819 11:30:06.555100  121308 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 11:30:06.555202  121308 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 11:30:06.555356  121308 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 11:30:06.555516  121308 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 11:30:06.555639  121308 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 11:30:06.555785  121308 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 11:30:06.555919  121308 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 11:30:06.555981  121308 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 11:30:06.556039  121308 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 11:30:06.556048  121308 kubeadm.go:310] 
	I0819 11:30:06.556097  121308 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 11:30:06.556103  121308 kubeadm.go:310] 
	I0819 11:30:06.556172  121308 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 11:30:06.556178  121308 kubeadm.go:310] 
	I0819 11:30:06.556199  121308 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 11:30:06.556272  121308 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 11:30:06.556350  121308 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 11:30:06.556358  121308 kubeadm.go:310] 
	I0819 11:30:06.556429  121308 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 11:30:06.556438  121308 kubeadm.go:310] 
	I0819 11:30:06.556517  121308 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 11:30:06.556527  121308 kubeadm.go:310] 
	I0819 11:30:06.556607  121308 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 11:30:06.556705  121308 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 11:30:06.556782  121308 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 11:30:06.556788  121308 kubeadm.go:310] 
	I0819 11:30:06.556861  121308 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 11:30:06.556929  121308 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 11:30:06.556936  121308 kubeadm.go:310] 
	I0819 11:30:06.557008  121308 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yabek6.lq4ketpzskifobiz \
	I0819 11:30:06.557103  121308 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9a531b4470c09d8f50eed8894aa9b341ca16ed7810e6d53dbdc0d1c4cff0af52 \
	I0819 11:30:06.557122  121308 kubeadm.go:310] 	--control-plane 
	I0819 11:30:06.557128  121308 kubeadm.go:310] 
	I0819 11:30:06.557237  121308 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 11:30:06.557244  121308 kubeadm.go:310] 
	I0819 11:30:06.557314  121308 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yabek6.lq4ketpzskifobiz \
	I0819 11:30:06.557413  121308 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9a531b4470c09d8f50eed8894aa9b341ca16ed7810e6d53dbdc0d1c4cff0af52 
	I0819 11:30:06.557424  121308 cni.go:84] Creating CNI manager for ""
	I0819 11:30:06.557429  121308 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 11:30:06.558979  121308 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 11:30:06.560153  121308 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 11:30:06.565679  121308 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 11:30:06.565705  121308 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 11:30:06.585137  121308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
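The two steps above render the kindnet manifest in memory, copy it to /var/tmp/minikube/cni.yaml on the guest, and apply it with the cluster's kubeconfig. Sketched below as a local Go helper; the manifest source and helper name are assumptions, while the kubectl invocation mirrors the logged command.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyCNIManifest writes the manifest to the path used in the log and applies
// it with the cluster's kubeconfig, mirroring the two ssh_runner steps above.
func applyCNIManifest(manifest []byte) error {
	const path = "/var/tmp/minikube/cni.yaml"
	if err := os.MkdirAll("/var/tmp/minikube", 0o755); err != nil {
		return err
	}
	if err := os.WriteFile(path, manifest, 0o644); err != nil {
		return err
	}
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.0/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", path)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	manifest, err := os.ReadFile("cni.yaml") // assumed: kindnet manifest rendered earlier
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := applyCNIManifest(manifest); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}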
	I0819 11:30:06.909534  121308 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 11:30:06.909607  121308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:30:06.909639  121308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-503856 minikube.k8s.io/updated_at=2024_08_19T11_30_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934 minikube.k8s.io/name=ha-503856 minikube.k8s.io/primary=true
	I0819 11:30:07.119368  121308 ops.go:34] apiserver oom_adj: -16
	I0819 11:30:07.119519  121308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:30:07.248005  121308 kubeadm.go:1113] duration metric: took 338.457988ms to wait for elevateKubeSystemPrivileges
	I0819 11:30:07.248052  121308 kubeadm.go:394] duration metric: took 15.193184523s to StartCluster
	I0819 11:30:07.248075  121308 settings.go:142] acquiring lock: {Name:mk5d5753fc545a0b5fdfa44a1e5cbc5d198d9dfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:30:07.248156  121308 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 11:30:07.248847  121308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/kubeconfig: {Name:mk73914d2bd0db664ade6c952753a7dd30404784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:30:07.249064  121308 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 11:30:07.249064  121308 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 11:30:07.249089  121308 start.go:241] waiting for startup goroutines ...
	I0819 11:30:07.249101  121308 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 11:30:07.249196  121308 addons.go:69] Setting storage-provisioner=true in profile "ha-503856"
	I0819 11:30:07.249212  121308 addons.go:69] Setting default-storageclass=true in profile "ha-503856"
	I0819 11:30:07.249232  121308 addons.go:234] Setting addon storage-provisioner=true in "ha-503856"
	I0819 11:30:07.249250  121308 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-503856"
	I0819 11:30:07.249251  121308 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:30:07.249274  121308 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:30:07.249694  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:30:07.249714  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:30:07.249737  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:30:07.249761  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:30:07.265767  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44567
	I0819 11:30:07.265832  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40067
	I0819 11:30:07.266265  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:30:07.266314  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:30:07.266808  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:30:07.266825  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:30:07.266977  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:30:07.267000  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:30:07.267194  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:30:07.267342  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:30:07.267507  121308 main.go:141] libmachine: (ha-503856) Calling .GetState
	I0819 11:30:07.267745  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:30:07.267776  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:30:07.269663  121308 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 11:30:07.269905  121308 kapi.go:59] client config for ha-503856: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.crt", KeyFile:"/home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.key", CAFile:"/home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 11:30:07.270382  121308 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 11:30:07.270638  121308 addons.go:234] Setting addon default-storageclass=true in "ha-503856"
	I0819 11:30:07.270672  121308 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:30:07.270952  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:30:07.270986  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:30:07.283689  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38263
	I0819 11:30:07.284162  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:30:07.284752  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:30:07.284783  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:30:07.285161  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:30:07.285407  121308 main.go:141] libmachine: (ha-503856) Calling .GetState
	I0819 11:30:07.286517  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37505
	I0819 11:30:07.286908  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:30:07.287185  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:30:07.287409  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:30:07.287431  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:30:07.287935  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:30:07.288407  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:30:07.288433  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:30:07.289086  121308 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:30:07.290474  121308 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:30:07.290499  121308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 11:30:07.290522  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:30:07.293413  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:30:07.293863  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:30:07.293892  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:30:07.294038  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:30:07.294200  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:30:07.294339  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:30:07.294448  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:30:07.304382  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40211
	I0819 11:30:07.304936  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:30:07.305477  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:30:07.305504  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:30:07.305822  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:30:07.306032  121308 main.go:141] libmachine: (ha-503856) Calling .GetState
	I0819 11:30:07.307809  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:30:07.308080  121308 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 11:30:07.308098  121308 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 11:30:07.308119  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:30:07.311908  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:30:07.311986  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:30:07.312030  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:30:07.312069  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:30:07.312246  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:30:07.312967  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:30:07.313160  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:30:07.386606  121308 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 11:30:07.453696  121308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:30:07.482457  121308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 11:30:07.710263  121308 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
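The `kubectl ... | sed ... | kubectl replace` pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-only gateway (192.168.39.1) and query logging is enabled. A self-contained Go sketch of that Corefile edit follows; it is illustrative only — on the node minikube does this with kubectl piped through sed, exactly as logged.

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord adds a hosts{} stanza before the forward plugin and a log
// directive before errors, matching the sed expressions in the command above.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.Split(strings.TrimRight(corefile, "\n"), "\n") {
		switch trimmed := strings.TrimSpace(line); {
		case strings.HasPrefix(trimmed, "forward . /etc/resolv.conf"):
			out.WriteString(hostsBlock)
		case trimmed == "errors":
			out.WriteString("        log\n")
		}
		out.WriteString(line)
		out.WriteString("\n")
	}
	return out.String()
}

func main() {
	// Simplified stand-in for the real coredns Corefile.
	corefile := ".:53 {\n        errors\n        health\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}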
	I0819 11:30:08.060035  121308 main.go:141] libmachine: Making call to close driver server
	I0819 11:30:08.060061  121308 main.go:141] libmachine: (ha-503856) Calling .Close
	I0819 11:30:08.060069  121308 main.go:141] libmachine: Making call to close driver server
	I0819 11:30:08.060095  121308 main.go:141] libmachine: (ha-503856) Calling .Close
	I0819 11:30:08.060357  121308 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:30:08.060380  121308 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:30:08.060391  121308 main.go:141] libmachine: Making call to close driver server
	I0819 11:30:08.060400  121308 main.go:141] libmachine: (ha-503856) Calling .Close
	I0819 11:30:08.060434  121308 main.go:141] libmachine: (ha-503856) DBG | Closing plugin on server side
	I0819 11:30:08.060437  121308 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:30:08.060457  121308 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:30:08.060466  121308 main.go:141] libmachine: Making call to close driver server
	I0819 11:30:08.060474  121308 main.go:141] libmachine: (ha-503856) Calling .Close
	I0819 11:30:08.060645  121308 main.go:141] libmachine: (ha-503856) DBG | Closing plugin on server side
	I0819 11:30:08.060671  121308 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:30:08.060677  121308 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:30:08.060776  121308 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:30:08.060789  121308 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:30:08.060846  121308 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 11:30:08.060867  121308 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 11:30:08.060964  121308 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0819 11:30:08.060975  121308 round_trippers.go:469] Request Headers:
	I0819 11:30:08.060985  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:30:08.060993  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:30:08.083458  121308 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0819 11:30:08.084702  121308 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0819 11:30:08.084722  121308 round_trippers.go:469] Request Headers:
	I0819 11:30:08.084732  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:30:08.084738  121308 round_trippers.go:473]     Content-Type: application/json
	I0819 11:30:08.084744  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:30:08.100808  121308 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
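The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses/standard is the default-storageclass addon updating the "standard" class; making a class the cluster default is done with the storageclass.kubernetes.io/is-default-class annotation. The client-go sketch below shows roughly what this step amounts to — the kubeconfig path is taken from the log, everything else is an assumption and this is not minikube's addons code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19476-99410/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	// Fetch the class created by the storageclass addon, annotate it as the
	// cluster default, and write it back (the GET then PUT seen in the log).
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("standard marked as default StorageClass")
}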
	I0819 11:30:08.101030  121308 main.go:141] libmachine: Making call to close driver server
	I0819 11:30:08.101055  121308 main.go:141] libmachine: (ha-503856) Calling .Close
	I0819 11:30:08.101425  121308 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:30:08.101485  121308 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:30:08.101528  121308 main.go:141] libmachine: (ha-503856) DBG | Closing plugin on server side
	I0819 11:30:08.103230  121308 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0819 11:30:08.104327  121308 addons.go:510] duration metric: took 855.225246ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0819 11:30:08.104372  121308 start.go:246] waiting for cluster config update ...
	I0819 11:30:08.104384  121308 start.go:255] writing updated cluster config ...
	I0819 11:30:08.105911  121308 out.go:201] 
	I0819 11:30:08.107390  121308 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:30:08.107480  121308 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/config.json ...
	I0819 11:30:08.109052  121308 out.go:177] * Starting "ha-503856-m02" control-plane node in "ha-503856" cluster
	I0819 11:30:08.110115  121308 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 11:30:08.110150  121308 cache.go:56] Caching tarball of preloaded images
	I0819 11:30:08.110265  121308 preload.go:172] Found /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 11:30:08.110282  121308 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 11:30:08.110379  121308 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/config.json ...
	I0819 11:30:08.110595  121308 start.go:360] acquireMachinesLock for ha-503856-m02: {Name:mk79269fbbd6912302c8df8ddb039d4e2f1a0790 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:30:08.110658  121308 start.go:364] duration metric: took 39.83µs to acquireMachinesLock for "ha-503856-m02"
	I0819 11:30:08.110683  121308 start.go:93] Provisioning new machine with config: &{Name:ha-503856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 11:30:08.110763  121308 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0819 11:30:08.112395  121308 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:30:08.112515  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:30:08.112542  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:30:08.128109  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35545
	I0819 11:30:08.128643  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:30:08.129206  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:30:08.129228  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:30:08.129554  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:30:08.129809  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetMachineName
	I0819 11:30:08.130006  121308 main.go:141] libmachine: (ha-503856-m02) Calling .DriverName
	I0819 11:30:08.130207  121308 start.go:159] libmachine.API.Create for "ha-503856" (driver="kvm2")
	I0819 11:30:08.130232  121308 client.go:168] LocalClient.Create starting
	I0819 11:30:08.130260  121308 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem
	I0819 11:30:08.130294  121308 main.go:141] libmachine: Decoding PEM data...
	I0819 11:30:08.130310  121308 main.go:141] libmachine: Parsing certificate...
	I0819 11:30:08.130362  121308 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem
	I0819 11:30:08.130383  121308 main.go:141] libmachine: Decoding PEM data...
	I0819 11:30:08.130393  121308 main.go:141] libmachine: Parsing certificate...
	I0819 11:30:08.130409  121308 main.go:141] libmachine: Running pre-create checks...
	I0819 11:30:08.130417  121308 main.go:141] libmachine: (ha-503856-m02) Calling .PreCreateCheck
	I0819 11:30:08.130604  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetConfigRaw
	I0819 11:30:08.131002  121308 main.go:141] libmachine: Creating machine...
	I0819 11:30:08.131014  121308 main.go:141] libmachine: (ha-503856-m02) Calling .Create
	I0819 11:30:08.131163  121308 main.go:141] libmachine: (ha-503856-m02) Creating KVM machine...
	I0819 11:30:08.132517  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found existing default KVM network
	I0819 11:30:08.132683  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found existing private KVM network mk-ha-503856
	I0819 11:30:08.132816  121308 main.go:141] libmachine: (ha-503856-m02) Setting up store path in /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02 ...
	I0819 11:30:08.132842  121308 main.go:141] libmachine: (ha-503856-m02) Building disk image from file:///home/jenkins/minikube-integration/19476-99410/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 11:30:08.132895  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:08.132790  121667 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 11:30:08.132996  121308 main.go:141] libmachine: (ha-503856-m02) Downloading /home/jenkins/minikube-integration/19476-99410/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19476-99410/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 11:30:08.389790  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:08.389614  121667 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/id_rsa...
	I0819 11:30:08.583984  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:08.583797  121667 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/ha-503856-m02.rawdisk...
	I0819 11:30:08.584027  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Writing magic tar header
	I0819 11:30:08.584042  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Writing SSH key tar header
	I0819 11:30:08.584055  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:08.583938  121667 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02 ...
	I0819 11:30:08.584071  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02
	I0819 11:30:08.584090  121308 main.go:141] libmachine: (ha-503856-m02) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02 (perms=drwx------)
	I0819 11:30:08.584104  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube/machines
	I0819 11:30:08.584116  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 11:30:08.584123  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410
	I0819 11:30:08.584134  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 11:30:08.584147  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Checking permissions on dir: /home/jenkins
	I0819 11:30:08.584158  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Checking permissions on dir: /home
	I0819 11:30:08.584167  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Skipping /home - not owner
	I0819 11:30:08.584179  121308 main.go:141] libmachine: (ha-503856-m02) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube/machines (perms=drwxr-xr-x)
	I0819 11:30:08.584216  121308 main.go:141] libmachine: (ha-503856-m02) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube (perms=drwxr-xr-x)
	I0819 11:30:08.584237  121308 main.go:141] libmachine: (ha-503856-m02) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410 (perms=drwxrwxr-x)
	I0819 11:30:08.584246  121308 main.go:141] libmachine: (ha-503856-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 11:30:08.584257  121308 main.go:141] libmachine: (ha-503856-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 11:30:08.584291  121308 main.go:141] libmachine: (ha-503856-m02) Creating domain...
	I0819 11:30:08.585250  121308 main.go:141] libmachine: (ha-503856-m02) define libvirt domain using xml: 
	I0819 11:30:08.585272  121308 main.go:141] libmachine: (ha-503856-m02) <domain type='kvm'>
	I0819 11:30:08.585282  121308 main.go:141] libmachine: (ha-503856-m02)   <name>ha-503856-m02</name>
	I0819 11:30:08.585294  121308 main.go:141] libmachine: (ha-503856-m02)   <memory unit='MiB'>2200</memory>
	I0819 11:30:08.585304  121308 main.go:141] libmachine: (ha-503856-m02)   <vcpu>2</vcpu>
	I0819 11:30:08.585313  121308 main.go:141] libmachine: (ha-503856-m02)   <features>
	I0819 11:30:08.585324  121308 main.go:141] libmachine: (ha-503856-m02)     <acpi/>
	I0819 11:30:08.585334  121308 main.go:141] libmachine: (ha-503856-m02)     <apic/>
	I0819 11:30:08.585342  121308 main.go:141] libmachine: (ha-503856-m02)     <pae/>
	I0819 11:30:08.585355  121308 main.go:141] libmachine: (ha-503856-m02)     
	I0819 11:30:08.585366  121308 main.go:141] libmachine: (ha-503856-m02)   </features>
	I0819 11:30:08.585376  121308 main.go:141] libmachine: (ha-503856-m02)   <cpu mode='host-passthrough'>
	I0819 11:30:08.585384  121308 main.go:141] libmachine: (ha-503856-m02)   
	I0819 11:30:08.585391  121308 main.go:141] libmachine: (ha-503856-m02)   </cpu>
	I0819 11:30:08.585402  121308 main.go:141] libmachine: (ha-503856-m02)   <os>
	I0819 11:30:08.585413  121308 main.go:141] libmachine: (ha-503856-m02)     <type>hvm</type>
	I0819 11:30:08.585435  121308 main.go:141] libmachine: (ha-503856-m02)     <boot dev='cdrom'/>
	I0819 11:30:08.585452  121308 main.go:141] libmachine: (ha-503856-m02)     <boot dev='hd'/>
	I0819 11:30:08.585463  121308 main.go:141] libmachine: (ha-503856-m02)     <bootmenu enable='no'/>
	I0819 11:30:08.585468  121308 main.go:141] libmachine: (ha-503856-m02)   </os>
	I0819 11:30:08.585476  121308 main.go:141] libmachine: (ha-503856-m02)   <devices>
	I0819 11:30:08.585481  121308 main.go:141] libmachine: (ha-503856-m02)     <disk type='file' device='cdrom'>
	I0819 11:30:08.585491  121308 main.go:141] libmachine: (ha-503856-m02)       <source file='/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/boot2docker.iso'/>
	I0819 11:30:08.585497  121308 main.go:141] libmachine: (ha-503856-m02)       <target dev='hdc' bus='scsi'/>
	I0819 11:30:08.585506  121308 main.go:141] libmachine: (ha-503856-m02)       <readonly/>
	I0819 11:30:08.585510  121308 main.go:141] libmachine: (ha-503856-m02)     </disk>
	I0819 11:30:08.585516  121308 main.go:141] libmachine: (ha-503856-m02)     <disk type='file' device='disk'>
	I0819 11:30:08.585532  121308 main.go:141] libmachine: (ha-503856-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 11:30:08.585544  121308 main.go:141] libmachine: (ha-503856-m02)       <source file='/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/ha-503856-m02.rawdisk'/>
	I0819 11:30:08.585552  121308 main.go:141] libmachine: (ha-503856-m02)       <target dev='hda' bus='virtio'/>
	I0819 11:30:08.585558  121308 main.go:141] libmachine: (ha-503856-m02)     </disk>
	I0819 11:30:08.585565  121308 main.go:141] libmachine: (ha-503856-m02)     <interface type='network'>
	I0819 11:30:08.585571  121308 main.go:141] libmachine: (ha-503856-m02)       <source network='mk-ha-503856'/>
	I0819 11:30:08.585578  121308 main.go:141] libmachine: (ha-503856-m02)       <model type='virtio'/>
	I0819 11:30:08.585584  121308 main.go:141] libmachine: (ha-503856-m02)     </interface>
	I0819 11:30:08.585590  121308 main.go:141] libmachine: (ha-503856-m02)     <interface type='network'>
	I0819 11:30:08.585615  121308 main.go:141] libmachine: (ha-503856-m02)       <source network='default'/>
	I0819 11:30:08.585637  121308 main.go:141] libmachine: (ha-503856-m02)       <model type='virtio'/>
	I0819 11:30:08.585649  121308 main.go:141] libmachine: (ha-503856-m02)     </interface>
	I0819 11:30:08.585659  121308 main.go:141] libmachine: (ha-503856-m02)     <serial type='pty'>
	I0819 11:30:08.585666  121308 main.go:141] libmachine: (ha-503856-m02)       <target port='0'/>
	I0819 11:30:08.585671  121308 main.go:141] libmachine: (ha-503856-m02)     </serial>
	I0819 11:30:08.585679  121308 main.go:141] libmachine: (ha-503856-m02)     <console type='pty'>
	I0819 11:30:08.585692  121308 main.go:141] libmachine: (ha-503856-m02)       <target type='serial' port='0'/>
	I0819 11:30:08.585706  121308 main.go:141] libmachine: (ha-503856-m02)     </console>
	I0819 11:30:08.585721  121308 main.go:141] libmachine: (ha-503856-m02)     <rng model='virtio'>
	I0819 11:30:08.585735  121308 main.go:141] libmachine: (ha-503856-m02)       <backend model='random'>/dev/random</backend>
	I0819 11:30:08.585745  121308 main.go:141] libmachine: (ha-503856-m02)     </rng>
	I0819 11:30:08.585756  121308 main.go:141] libmachine: (ha-503856-m02)     
	I0819 11:30:08.585763  121308 main.go:141] libmachine: (ha-503856-m02)     
	I0819 11:30:08.585770  121308 main.go:141] libmachine: (ha-503856-m02)   </devices>
	I0819 11:30:08.585779  121308 main.go:141] libmachine: (ha-503856-m02) </domain>
	I0819 11:30:08.585792  121308 main.go:141] libmachine: (ha-503856-m02) 
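The XML streamed above is the libvirt domain definition for the ha-503856-m02 machine. The kvm2 driver defines and starts the domain through the libvirt API; done by hand, the equivalent would be `virsh define` followed by `virsh start`, sketched here in Go (the XML file path is an assumption).

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	xmlPath := "/tmp/ha-503856-m02.xml" // assumed: the domain XML above saved to a file
	steps := [][]string{
		{"define", xmlPath},        // register the domain with libvirt
		{"start", "ha-503856-m02"}, // boot it; DHCP on mk-ha-503856 then assigns the IP
	}
	for _, args := range steps {
		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "virsh %v failed: %v\n", args, err)
			os.Exit(1)
		}
	}
}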
	I0819 11:30:08.592506  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:a8:75:24 in network default
	I0819 11:30:08.593200  121308 main.go:141] libmachine: (ha-503856-m02) Ensuring networks are active...
	I0819 11:30:08.593230  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:08.593954  121308 main.go:141] libmachine: (ha-503856-m02) Ensuring network default is active
	I0819 11:30:08.594280  121308 main.go:141] libmachine: (ha-503856-m02) Ensuring network mk-ha-503856 is active
	I0819 11:30:08.594611  121308 main.go:141] libmachine: (ha-503856-m02) Getting domain xml...
	I0819 11:30:08.595301  121308 main.go:141] libmachine: (ha-503856-m02) Creating domain...
	I0819 11:30:09.826428  121308 main.go:141] libmachine: (ha-503856-m02) Waiting to get IP...
	I0819 11:30:09.827339  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:09.827832  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:09.827867  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:09.827803  121667 retry.go:31] will retry after 299.927656ms: waiting for machine to come up
	I0819 11:30:10.129376  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:10.129988  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:10.130021  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:10.129940  121667 retry.go:31] will retry after 311.299317ms: waiting for machine to come up
	I0819 11:30:10.443603  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:10.443986  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:10.444012  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:10.443955  121667 retry.go:31] will retry after 295.003949ms: waiting for machine to come up
	I0819 11:30:10.740642  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:10.741084  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:10.741113  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:10.741048  121667 retry.go:31] will retry after 513.484638ms: waiting for machine to come up
	I0819 11:30:11.255793  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:11.256269  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:11.256294  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:11.256245  121667 retry.go:31] will retry after 566.925586ms: waiting for machine to come up
	I0819 11:30:11.825259  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:11.825767  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:11.825811  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:11.825738  121667 retry.go:31] will retry after 700.755721ms: waiting for machine to come up
	I0819 11:30:12.527531  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:12.528038  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:12.528065  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:12.527993  121667 retry.go:31] will retry after 797.139943ms: waiting for machine to come up
	I0819 11:30:13.326500  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:13.326995  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:13.327017  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:13.326953  121667 retry.go:31] will retry after 1.316756605s: waiting for machine to come up
	I0819 11:30:14.645396  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:14.645791  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:14.645825  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:14.645741  121667 retry.go:31] will retry after 1.440866555s: waiting for machine to come up
	I0819 11:30:16.088424  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:16.088883  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:16.088916  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:16.088837  121667 retry.go:31] will retry after 1.484428334s: waiting for machine to come up
	I0819 11:30:17.575583  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:17.576094  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:17.576117  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:17.576050  121667 retry.go:31] will retry after 1.746492547s: waiting for machine to come up
	I0819 11:30:19.324664  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:19.325115  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:19.325145  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:19.325073  121667 retry.go:31] will retry after 2.555649627s: waiting for machine to come up
	I0819 11:30:21.883814  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:21.884198  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:21.884223  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:21.884163  121667 retry.go:31] will retry after 4.287218616s: waiting for machine to come up
	I0819 11:30:26.174809  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:26.175121  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:26.175146  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:26.175084  121667 retry.go:31] will retry after 4.431060865s: waiting for machine to come up
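	[editor's note] The repeated "unable to find current IP ... will retry after ..." lines above show a poll-with-growing-backoff wait for the DHCP lease. Below is a generic, hedged sketch of that pattern in Go; the function and variable names are illustrative and are not minikube's retry.go API.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup() until it returns an address or the deadline expires,
	// sleeping a little longer (with jitter) after each failed attempt, similar to
	// the backoff visible in the log above.
	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 300 * time.Millisecond
		for attempt := 1; time.Now().Before(deadline); attempt++ {
			ip, err := lookup()
			if err == nil && ip != "" {
				return ip, nil
			}
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
			fmt.Printf("attempt %d: no IP yet, will retry after %s\n", attempt, sleep)
			time.Sleep(sleep)
			backoff += backoff / 2 // grow the delay roughly 1.5x per attempt
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		// Stand-in lookup that "finds" an IP after a few attempts.
		calls := 0
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 4 {
				return "", errors.New("no DHCP lease yet")
			}
			return "192.168.39.183", nil
		}, 30*time.Second)
		fmt.Println(ip, err)
	}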
	I0819 11:30:30.608735  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:30.609284  121308 main.go:141] libmachine: (ha-503856-m02) Found IP for machine: 192.168.39.183
	I0819 11:30:30.609311  121308 main.go:141] libmachine: (ha-503856-m02) Reserving static IP address...
	I0819 11:30:30.609326  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has current primary IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:30.609690  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find host DHCP lease matching {name: "ha-503856-m02", mac: "52:54:00:f7:a0:c4", ip: "192.168.39.183"} in network mk-ha-503856
	I0819 11:30:30.690434  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Getting to WaitForSSH function...
	I0819 11:30:30.690461  121308 main.go:141] libmachine: (ha-503856-m02) Reserved static IP address: 192.168.39.183
	I0819 11:30:30.690475  121308 main.go:141] libmachine: (ha-503856-m02) Waiting for SSH to be available...
	I0819 11:30:30.693230  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:30.693633  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:30.693665  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:30.693784  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Using SSH client type: external
	I0819 11:30:30.693811  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/id_rsa (-rw-------)
	I0819 11:30:30.693843  121308 main.go:141] libmachine: (ha-503856-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 11:30:30.693854  121308 main.go:141] libmachine: (ha-503856-m02) DBG | About to run SSH command:
	I0819 11:30:30.693886  121308 main.go:141] libmachine: (ha-503856-m02) DBG | exit 0
	I0819 11:30:30.820077  121308 main.go:141] libmachine: (ha-503856-m02) DBG | SSH cmd err, output: <nil>: 
	I0819 11:30:30.820545  121308 main.go:141] libmachine: (ha-503856-m02) KVM machine creation complete!
	I0819 11:30:30.820897  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetConfigRaw
	I0819 11:30:30.821423  121308 main.go:141] libmachine: (ha-503856-m02) Calling .DriverName
	I0819 11:30:30.821680  121308 main.go:141] libmachine: (ha-503856-m02) Calling .DriverName
	I0819 11:30:30.821884  121308 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 11:30:30.821898  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetState
	I0819 11:30:30.823125  121308 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 11:30:30.823141  121308 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 11:30:30.823152  121308 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 11:30:30.823158  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:30:30.825412  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:30.825831  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:30.825870  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:30.825986  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:30:30.826173  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:30.826342  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:30.826472  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:30:30.826650  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:30:30.826858  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0819 11:30:30.826877  121308 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 11:30:30.934882  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: 
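	[editor's note] The WaitForSSH step above simply runs "exit 0" over SSH until it succeeds. A minimal sketch of that probe using golang.org/x/crypto/ssh follows; the host, user, and key path mirror the log, but the wrapper itself is an assumption rather than the libmachine implementation.

	package main

	import (
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// sshExitZero connects to addr with the given private key and runs "exit 0",
	// returning nil once the command succeeds -- roughly what the log's
	// "About to run SSH command: exit 0" probe is doing.
	func sshExitZero(addr, user, keyPath string) error {
		keyBytes, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return err
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			return err
		}
		defer session.Close()
		return session.Run("exit 0")
	}

	func main() {
		err := sshExitZero("192.168.39.183:22", "docker",
			"/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		log.Println("SSH is available")
	}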
	I0819 11:30:30.934908  121308 main.go:141] libmachine: Detecting the provisioner...
	I0819 11:30:30.934916  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:30:30.937760  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:30.938174  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:30.938201  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:30.938321  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:30:30.938551  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:30.938701  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:30.938946  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:30:30.939113  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:30:30.939291  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0819 11:30:30.939304  121308 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 11:30:31.048405  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 11:30:31.048491  121308 main.go:141] libmachine: found compatible host: buildroot
	I0819 11:30:31.048501  121308 main.go:141] libmachine: Provisioning with buildroot...
	I0819 11:30:31.048509  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetMachineName
	I0819 11:30:31.048768  121308 buildroot.go:166] provisioning hostname "ha-503856-m02"
	I0819 11:30:31.048797  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetMachineName
	I0819 11:30:31.048986  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:30:31.051548  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.051960  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:31.051995  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.052156  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:30:31.052353  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:31.052495  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:31.052744  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:30:31.052951  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:30:31.053118  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0819 11:30:31.053130  121308 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-503856-m02 && echo "ha-503856-m02" | sudo tee /etc/hostname
	I0819 11:30:31.173786  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-503856-m02
	
	I0819 11:30:31.173823  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:30:31.176922  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.177296  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:31.177326  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.177504  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:30:31.177745  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:31.177916  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:31.178069  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:30:31.178241  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:30:31.178409  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0819 11:30:31.178423  121308 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-503856-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-503856-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-503856-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 11:30:31.293191  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 11:30:31.293225  121308 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19476-99410/.minikube CaCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19476-99410/.minikube}
	I0819 11:30:31.293242  121308 buildroot.go:174] setting up certificates
	I0819 11:30:31.293256  121308 provision.go:84] configureAuth start
	I0819 11:30:31.293267  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetMachineName
	I0819 11:30:31.293589  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetIP
	I0819 11:30:31.296212  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.296559  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:31.296588  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.296783  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:30:31.299091  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.299458  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:31.299487  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.299640  121308 provision.go:143] copyHostCerts
	I0819 11:30:31.299670  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 11:30:31.299702  121308 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem, removing ...
	I0819 11:30:31.299710  121308 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 11:30:31.299825  121308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem (1082 bytes)
	I0819 11:30:31.299948  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 11:30:31.299976  121308 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem, removing ...
	I0819 11:30:31.299984  121308 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 11:30:31.300017  121308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem (1123 bytes)
	I0819 11:30:31.300074  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 11:30:31.300100  121308 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem, removing ...
	I0819 11:30:31.300110  121308 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 11:30:31.300143  121308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem (1679 bytes)
	I0819 11:30:31.300218  121308 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem org=jenkins.ha-503856-m02 san=[127.0.0.1 192.168.39.183 ha-503856-m02 localhost minikube]
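	[editor's note] The "generating server cert ... san=[...]" step above produces an x509 server certificate whose subject alternative names cover the node's IPs and hostnames. A hedged sketch with Go's crypto/x509 follows; it is self-signed here for brevity, whereas minikube signs with its own CA key, so this is not the exact code path.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}

		// SANs mirroring the log: node IPs plus hostnames.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-503856-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-503856-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.183")},
		}

		// Self-signed for brevity; a real setup would pass the CA cert and key as parent/signer.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			log.Fatal(err)
		}
	}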
	I0819 11:30:31.427800  121308 provision.go:177] copyRemoteCerts
	I0819 11:30:31.427860  121308 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 11:30:31.427888  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:30:31.430576  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.430972  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:31.430999  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.431226  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:30:31.431384  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:31.431573  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:30:31.431680  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/id_rsa Username:docker}
	I0819 11:30:31.513374  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 11:30:31.513451  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 11:30:31.537929  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 11:30:31.538005  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 11:30:31.561451  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 11:30:31.561522  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 11:30:31.585683  121308 provision.go:87] duration metric: took 292.413889ms to configureAuth
	I0819 11:30:31.585715  121308 buildroot.go:189] setting minikube options for container-runtime
	I0819 11:30:31.585891  121308 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:30:31.585969  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:30:31.588785  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.589189  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:31.589220  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.589434  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:30:31.589671  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:31.589835  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:31.589966  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:30:31.590200  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:30:31.590361  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0819 11:30:31.590376  121308 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 11:30:31.858131  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 11:30:31.858162  121308 main.go:141] libmachine: Checking connection to Docker...
	I0819 11:30:31.858173  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetURL
	I0819 11:30:31.859585  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Using libvirt version 6000000
	I0819 11:30:31.861824  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.862204  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:31.862229  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.862387  121308 main.go:141] libmachine: Docker is up and running!
	I0819 11:30:31.862401  121308 main.go:141] libmachine: Reticulating splines...
	I0819 11:30:31.862408  121308 client.go:171] duration metric: took 23.732169027s to LocalClient.Create
	I0819 11:30:31.862431  121308 start.go:167] duration metric: took 23.73222649s to libmachine.API.Create "ha-503856"
	I0819 11:30:31.862454  121308 start.go:293] postStartSetup for "ha-503856-m02" (driver="kvm2")
	I0819 11:30:31.862467  121308 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 11:30:31.862485  121308 main.go:141] libmachine: (ha-503856-m02) Calling .DriverName
	I0819 11:30:31.862762  121308 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 11:30:31.862790  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:30:31.865315  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.865638  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:31.865667  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.865870  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:30:31.866061  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:31.866206  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:30:31.866313  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/id_rsa Username:docker}
	I0819 11:30:31.950447  121308 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 11:30:31.954913  121308 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 11:30:31.954939  121308 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/addons for local assets ...
	I0819 11:30:31.955007  121308 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/files for local assets ...
	I0819 11:30:31.955098  121308 filesync.go:149] local asset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> 1066322.pem in /etc/ssl/certs
	I0819 11:30:31.955111  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> /etc/ssl/certs/1066322.pem
	I0819 11:30:31.955224  121308 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 11:30:31.966063  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 11:30:31.989442  121308 start.go:296] duration metric: took 126.969365ms for postStartSetup
	I0819 11:30:31.989502  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetConfigRaw
	I0819 11:30:31.990112  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetIP
	I0819 11:30:31.992933  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.993258  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:31.993284  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.993519  121308 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/config.json ...
	I0819 11:30:31.993720  121308 start.go:128] duration metric: took 23.882946899s to createHost
	I0819 11:30:31.993743  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:30:31.995746  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.996103  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:31.996133  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.996339  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:30:31.996547  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:31.996739  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:31.996870  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:30:31.997017  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:30:31.997188  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0819 11:30:31.997199  121308 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 11:30:32.108440  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724067032.084837525
	
	I0819 11:30:32.108462  121308 fix.go:216] guest clock: 1724067032.084837525
	I0819 11:30:32.108472  121308 fix.go:229] Guest: 2024-08-19 11:30:32.084837525 +0000 UTC Remote: 2024-08-19 11:30:31.993731508 +0000 UTC m=+67.007093531 (delta=91.106017ms)
	I0819 11:30:32.108488  121308 fix.go:200] guest clock delta is within tolerance: 91.106017ms
	I0819 11:30:32.108493  121308 start.go:83] releasing machines lock for "ha-503856-m02", held for 23.997823637s
	I0819 11:30:32.108516  121308 main.go:141] libmachine: (ha-503856-m02) Calling .DriverName
	I0819 11:30:32.108789  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetIP
	I0819 11:30:32.111710  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:32.112085  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:32.112106  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:32.114598  121308 out.go:177] * Found network options:
	I0819 11:30:32.116087  121308 out.go:177]   - NO_PROXY=192.168.39.102
	W0819 11:30:32.117413  121308 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 11:30:32.117452  121308 main.go:141] libmachine: (ha-503856-m02) Calling .DriverName
	I0819 11:30:32.118107  121308 main.go:141] libmachine: (ha-503856-m02) Calling .DriverName
	I0819 11:30:32.118324  121308 main.go:141] libmachine: (ha-503856-m02) Calling .DriverName
	I0819 11:30:32.118429  121308 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 11:30:32.118481  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	W0819 11:30:32.118501  121308 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 11:30:32.118572  121308 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 11:30:32.118590  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:30:32.121159  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:32.121469  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:32.121497  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:32.121518  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:32.121619  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:30:32.121843  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:32.121942  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:32.121966  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:32.122007  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:30:32.122127  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:30:32.122192  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/id_rsa Username:docker}
	I0819 11:30:32.122282  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:32.122415  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:30:32.122545  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/id_rsa Username:docker}
	I0819 11:30:32.361893  121308 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 11:30:32.367427  121308 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 11:30:32.367508  121308 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 11:30:32.383095  121308 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 11:30:32.383128  121308 start.go:495] detecting cgroup driver to use...
	I0819 11:30:32.383213  121308 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 11:30:32.399017  121308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 11:30:32.413333  121308 docker.go:217] disabling cri-docker service (if available) ...
	I0819 11:30:32.413391  121308 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 11:30:32.427045  121308 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 11:30:32.440483  121308 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 11:30:32.554335  121308 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 11:30:32.722708  121308 docker.go:233] disabling docker service ...
	I0819 11:30:32.722791  121308 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 11:30:32.737323  121308 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 11:30:32.750688  121308 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 11:30:32.866584  121308 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 11:30:33.000130  121308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 11:30:33.014527  121308 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 11:30:33.033199  121308 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 11:30:33.033267  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:30:33.043906  121308 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 11:30:33.043988  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:30:33.054852  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:30:33.065887  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:30:33.076866  121308 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 11:30:33.087958  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:30:33.098863  121308 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:30:33.116386  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:30:33.127225  121308 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 11:30:33.137169  121308 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 11:30:33.137237  121308 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 11:30:33.151498  121308 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 11:30:33.161812  121308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:30:33.283359  121308 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 11:30:33.415690  121308 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 11:30:33.415778  121308 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 11:30:33.420433  121308 start.go:563] Will wait 60s for crictl version
	I0819 11:30:33.420518  121308 ssh_runner.go:195] Run: which crictl
	I0819 11:30:33.424267  121308 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 11:30:33.458933  121308 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 11:30:33.459018  121308 ssh_runner.go:195] Run: crio --version
	I0819 11:30:33.487119  121308 ssh_runner.go:195] Run: crio --version
	I0819 11:30:33.516093  121308 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 11:30:33.517495  121308 out.go:177]   - env NO_PROXY=192.168.39.102
	I0819 11:30:33.518782  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetIP
	I0819 11:30:33.521533  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:33.521862  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:33.521897  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:33.522107  121308 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 11:30:33.526210  121308 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 11:30:33.538699  121308 mustload.go:65] Loading cluster: ha-503856
	I0819 11:30:33.538932  121308 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:30:33.539195  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:30:33.539224  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:30:33.554159  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38577
	I0819 11:30:33.554634  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:30:33.555117  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:30:33.555136  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:30:33.555462  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:30:33.555695  121308 main.go:141] libmachine: (ha-503856) Calling .GetState
	I0819 11:30:33.557243  121308 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:30:33.557531  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:30:33.557565  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:30:33.573352  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36809
	I0819 11:30:33.573845  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:30:33.574317  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:30:33.574342  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:30:33.574702  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:30:33.574892  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:30:33.575054  121308 certs.go:68] Setting up /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856 for IP: 192.168.39.183
	I0819 11:30:33.575066  121308 certs.go:194] generating shared ca certs ...
	I0819 11:30:33.575080  121308 certs.go:226] acquiring lock for ca certs: {Name:mkea0a5571fe2f86ea35780a8a0585cdcb5f186c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:30:33.575215  121308 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key
	I0819 11:30:33.575253  121308 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key
	I0819 11:30:33.575262  121308 certs.go:256] generating profile certs ...
	I0819 11:30:33.575330  121308 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.key
	I0819 11:30:33.575356  121308 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.bedf2fd4
	I0819 11:30:33.575371  121308 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.bedf2fd4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.183 192.168.39.254]
	I0819 11:30:33.624010  121308 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.bedf2fd4 ...
	I0819 11:30:33.624041  121308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.bedf2fd4: {Name:mke27ab3fb040d48d7c1cc01e78d7e4a453c8d04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:30:33.624230  121308 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.bedf2fd4 ...
	I0819 11:30:33.624247  121308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.bedf2fd4: {Name:mkc1e1747687a6a505ff57a429911599db31ccfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:30:33.624345  121308 certs.go:381] copying /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.bedf2fd4 -> /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt
	I0819 11:30:33.624501  121308 certs.go:385] copying /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.bedf2fd4 -> /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key
	I0819 11:30:33.624625  121308 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key
	I0819 11:30:33.624642  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 11:30:33.624655  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 11:30:33.624668  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 11:30:33.624679  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 11:30:33.624692  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 11:30:33.624705  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 11:30:33.624717  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 11:30:33.624727  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 11:30:33.624778  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem (1338 bytes)
	W0819 11:30:33.624820  121308 certs.go:480] ignoring /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632_empty.pem, impossibly tiny 0 bytes
	I0819 11:30:33.624830  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 11:30:33.624852  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem (1082 bytes)
	I0819 11:30:33.624873  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem (1123 bytes)
	I0819 11:30:33.624896  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem (1679 bytes)
	I0819 11:30:33.624935  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 11:30:33.624963  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> /usr/share/ca-certificates/1066322.pem
	I0819 11:30:33.624976  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:30:33.624989  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem -> /usr/share/ca-certificates/106632.pem
	I0819 11:30:33.625021  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:30:33.627912  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:30:33.628320  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:30:33.628348  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:30:33.628482  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:30:33.628706  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:30:33.628840  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:30:33.628961  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:30:33.704219  121308 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 11:30:33.708823  121308 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 11:30:33.720378  121308 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 11:30:33.724563  121308 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0819 11:30:33.735612  121308 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 11:30:33.739497  121308 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 11:30:33.750157  121308 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 11:30:33.754138  121308 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0819 11:30:33.764679  121308 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 11:30:33.768785  121308 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 11:30:33.779059  121308 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 11:30:33.782880  121308 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0819 11:30:33.792984  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 11:30:33.818030  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 11:30:33.843321  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 11:30:33.866945  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 11:30:33.889780  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 11:30:33.912987  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 11:30:33.936231  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 11:30:33.959185  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 11:30:33.983213  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /usr/share/ca-certificates/1066322.pem (1708 bytes)
	I0819 11:30:34.007518  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 11:30:34.031985  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem --> /usr/share/ca-certificates/106632.pem (1338 bytes)
	I0819 11:30:34.056363  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 11:30:34.072884  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0819 11:30:34.089426  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 11:30:34.105655  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0819 11:30:34.121521  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 11:30:34.137617  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0819 11:30:34.153706  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
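	For reference: the stat/"scp ... --> memory" pairs above read the cluster-wide signing material (the sa key pair, the front-proxy CA and the etcd CA) into memory and the "scp memory --> ..." lines write it back out unchanged, so every control-plane node ends up signing with identical keys, which a multi-control-plane cluster requires; the profile-scoped certificates (apiserver, proxy-client) are copied from the host's .minikube profile directory instead. A quick way to confirm the shared material matches across nodes (sketch only; profile and node names as used in this run):
	  minikube ssh -p ha-503856 -- sudo sha256sum /var/lib/minikube/certs/sa.pub /var/lib/minikube/certs/etcd/ca.crt
	  minikube ssh -p ha-503856 -n m02 -- sudo sha256sum /var/lib/minikube/certs/sa.pub /var/lib/minikube/certs/etcd/ca.crt
	  # the two sets of digests should be identical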
	I0819 11:30:34.170550  121308 ssh_runner.go:195] Run: openssl version
	I0819 11:30:34.176187  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1066322.pem && ln -fs /usr/share/ca-certificates/1066322.pem /etc/ssl/certs/1066322.pem"
	I0819 11:30:34.187041  121308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1066322.pem
	I0819 11:30:34.191426  121308 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 11:26 /usr/share/ca-certificates/1066322.pem
	I0819 11:30:34.191490  121308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1066322.pem
	I0819 11:30:34.197138  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1066322.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 11:30:34.208036  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 11:30:34.218818  121308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:30:34.223221  121308 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:30:34.223308  121308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:30:34.228830  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 11:30:34.239404  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106632.pem && ln -fs /usr/share/ca-certificates/106632.pem /etc/ssl/certs/106632.pem"
	I0819 11:30:34.250261  121308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106632.pem
	I0819 11:30:34.254657  121308 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 11:26 /usr/share/ca-certificates/106632.pem
	I0819 11:30:34.254718  121308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106632.pem
	I0819 11:30:34.260583  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106632.pem /etc/ssl/certs/51391683.0"
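	For reference: each test/ln/hash sequence above installs a CA into the node's trust store using OpenSSL's lookup convention, where a certificate is symlinked under /etc/ssl/certs by its subject hash with a .0 suffix; that is how the 3ec20f2e.0, b5213941.0 and 51391683.0 names are derived. Done by hand for one of the certificates in this run (same paths, sketch):
	  CERT=/usr/share/ca-certificates/minikubeCA.pem
	  HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints b5213941 for this CA
	  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # matches the ln -fs ... b5213941.0 above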
	I0819 11:30:34.272616  121308 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 11:30:34.276768  121308 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 11:30:34.276825  121308 kubeadm.go:934] updating node {m02 192.168.39.183 8443 v1.31.0 crio true true} ...
	I0819 11:30:34.276909  121308 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-503856-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 11:30:34.276932  121308 kube-vip.go:115] generating kube-vip config ...
	I0819 11:30:34.276969  121308 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 11:30:34.294422  121308 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 11:30:34.294499  121308 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
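	For reference: the manifest above runs kube-vip as a static pod on each control-plane node. It answers ARP for the HA virtual IP 192.168.39.254 on eth0, load-balances API-server traffic on port 8443 (lb_enable/lb_port), and elects a leader through the plndr-cp-lock Lease in kube-system named by vip_leasename. Once the cluster is up, the current VIP holder can be checked with (sketch):
	  ip addr show eth0 | grep 192.168.39.254                  # the VIP is present only on the node currently holding it
	  kubectl -n kube-system get lease plndr-cp-lock -o yaml   # holderIdentity names the current leader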
	I0819 11:30:34.294587  121308 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 11:30:34.304718  121308 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 11:30:34.304797  121308 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 11:30:34.314847  121308 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0819 11:30:34.314867  121308 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0819 11:30:34.314886  121308 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0819 11:30:34.314894  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 11:30:34.314971  121308 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 11:30:34.319847  121308 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 11:30:34.319891  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 11:30:35.010437  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 11:30:35.010521  121308 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 11:30:35.015216  121308 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 11:30:35.015258  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 11:30:36.967376  121308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:30:36.982016  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 11:30:36.982125  121308 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 11:30:36.986462  121308 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 11:30:36.986505  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
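	For reference: kubectl, kubeadm and kubelet are each fetched from dl.k8s.io with a checksum=file:...sha256 query, which makes the downloader verify the binary against the published SHA-256 file before caching it under .minikube/cache and copying it to the node; the stat existence checks skip binaries that are already in place. A manual equivalent of that verification, using the same release URLs, looks roughly like:
	  curl -LO https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet
	  curl -LO https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	  echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check   # prints "kubelet: OK" on a match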
	I0819 11:30:37.289237  121308 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 11:30:37.298490  121308 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0819 11:30:37.315129  121308 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 11:30:37.332266  121308 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 11:30:37.349240  121308 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 11:30:37.353180  121308 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
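	For reference: the grep/rewrite pair above pins control-plane.minikube.internal to the HA virtual IP in /etc/hosts, so the kubeadm join further down reaches the API server through 192.168.39.254 rather than through any single control-plane address. After the rewrite the node should carry exactly one such entry:
	  grep control-plane.minikube.internal /etc/hosts
	  # 192.168.39.254	control-plane.minikube.internal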
	I0819 11:30:37.365239  121308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:30:37.485693  121308 ssh_runner.go:195] Run: sudo systemctl start kubelet
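	For reference: with the kubelet drop-in, the unit file and the kube-vip manifest pushed, systemd is reloaded and kubelet started; kubelet serves static pods from /etc/kubernetes/manifests, which is where kube-vip.yaml was just written, and kubeadm adds the remaining control-plane manifests there during the join below. This can be inspected on the node with (sketch):
	  systemctl status kubelet --no-pager
	  ls /etc/kubernetes/manifests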
	I0819 11:30:37.502177  121308 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:30:37.502548  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:30:37.502587  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:30:37.517697  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38483
	I0819 11:30:37.518244  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:30:37.518757  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:30:37.518779  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:30:37.519088  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:30:37.519270  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:30:37.519420  121308 start.go:317] joinCluster: &{Name:ha-503856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:30:37.519546  121308 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 11:30:37.519569  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:30:37.522799  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:30:37.523291  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:30:37.523322  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:30:37.523498  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:30:37.523666  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:30:37.523837  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:30:37.524029  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:30:37.659431  121308 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 11:30:37.659495  121308 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tahy39.ibnafofoxyrqjcwr --discovery-token-ca-cert-hash sha256:9a531b4470c09d8f50eed8894aa9b341ca16ed7810e6d53dbdc0d1c4cff0af52 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-503856-m02 --control-plane --apiserver-advertise-address=192.168.39.183 --apiserver-bind-port=8443"
	I0819 11:30:58.089041  121308 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tahy39.ibnafofoxyrqjcwr --discovery-token-ca-cert-hash sha256:9a531b4470c09d8f50eed8894aa9b341ca16ed7810e6d53dbdc0d1c4cff0af52 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-503856-m02 --control-plane --apiserver-advertise-address=192.168.39.183 --apiserver-bind-port=8443": (20.429515989s)
	I0819 11:30:58.089098  121308 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 11:30:58.629851  121308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-503856-m02 minikube.k8s.io/updated_at=2024_08_19T11_30_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934 minikube.k8s.io/name=ha-503856 minikube.k8s.io/primary=false
	I0819 11:30:58.742551  121308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-503856-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 11:30:58.868205  121308 start.go:319] duration metric: took 21.348781814s to joinCluster
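	For reference: joining the second control plane is the three-step sequence above; a join token is minted on the existing node (kubeadm token create --print-join-command --ttl=0), m02 runs kubeadm join ... --control-plane against the VIP endpoint, and kubectl then labels the node and removes the control-plane NoSchedule taint so it can also schedule workloads. Checking the result from the primary (sketch):
	  kubectl get nodes -o wide                                    # ha-503856 and ha-503856-m02 both listed as control-plane
	  kubectl get node ha-503856-m02 -o jsonpath='{.spec.taints}'  # empty once the taint has been removed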
	I0819 11:30:58.868289  121308 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 11:30:58.868567  121308 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:30:58.869796  121308 out.go:177] * Verifying Kubernetes components...
	I0819 11:30:58.870853  121308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:30:59.107866  121308 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:30:59.149279  121308 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 11:30:59.149724  121308 kapi.go:59] client config for ha-503856: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.crt", KeyFile:"/home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.key", CAFile:"/home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 11:30:59.149883  121308 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I0819 11:30:59.150229  121308 node_ready.go:35] waiting up to 6m0s for node "ha-503856-m02" to be "Ready" ...
	I0819 11:30:59.150352  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:30:59.150365  121308 round_trippers.go:469] Request Headers:
	I0819 11:30:59.150377  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:30:59.150386  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:30:59.162160  121308 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0819 11:30:59.651159  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:30:59.651190  121308 round_trippers.go:469] Request Headers:
	I0819 11:30:59.651202  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:30:59.651207  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:30:59.655026  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:00.150666  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:00.150695  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:00.150707  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:00.150715  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:00.155129  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:31:00.651313  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:00.651336  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:00.651345  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:00.651349  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:00.654727  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:01.150538  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:01.150562  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:01.150576  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:01.150582  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:01.153748  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:01.154496  121308 node_ready.go:53] node "ha-503856-m02" has status "Ready":"False"
	I0819 11:31:01.650881  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:01.650909  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:01.650923  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:01.650929  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:01.654267  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:02.150456  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:02.150483  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:02.150491  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:02.150495  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:02.154067  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:02.651456  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:02.651479  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:02.651489  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:02.651493  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:02.654896  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:03.151338  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:03.151362  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:03.151371  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:03.151377  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:03.156263  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:31:03.156754  121308 node_ready.go:53] node "ha-503856-m02" has status "Ready":"False"
	I0819 11:31:03.651095  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:03.651119  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:03.651127  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:03.651132  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:03.654480  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:04.151447  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:04.151469  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:04.151477  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:04.151480  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:04.154732  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:04.650480  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:04.650504  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:04.650515  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:04.650520  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:04.653989  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:05.151060  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:05.151093  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:05.151102  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:05.151107  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:05.155258  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:31:05.651366  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:05.651390  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:05.651398  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:05.651402  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:05.654773  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:05.655249  121308 node_ready.go:53] node "ha-503856-m02" has status "Ready":"False"
	I0819 11:31:06.150660  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:06.150685  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:06.150696  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:06.150701  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:06.157514  121308 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 11:31:06.650708  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:06.650733  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:06.650741  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:06.650746  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:06.653917  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:07.150807  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:07.150832  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:07.150840  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:07.150846  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:07.154267  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:07.651168  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:07.651193  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:07.651202  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:07.651207  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:07.654361  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:08.150812  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:08.150842  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:08.150855  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:08.150860  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:08.154188  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:08.154699  121308 node_ready.go:53] node "ha-503856-m02" has status "Ready":"False"
	I0819 11:31:08.651213  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:08.651236  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:08.651245  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:08.651248  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:08.658911  121308 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 11:31:09.150755  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:09.150787  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:09.150795  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:09.150799  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:09.274908  121308 round_trippers.go:574] Response Status: 200 OK in 124 milliseconds
	I0819 11:31:09.650556  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:09.650582  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:09.650590  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:09.650596  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:09.653939  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:10.151431  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:10.151459  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:10.151469  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:10.151474  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:10.154606  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:10.155127  121308 node_ready.go:53] node "ha-503856-m02" has status "Ready":"False"
	I0819 11:31:10.650536  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:10.650562  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:10.650571  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:10.650575  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:10.654012  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:11.150877  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:11.150902  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:11.150911  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:11.150915  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:11.154176  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:11.651206  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:11.651229  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:11.651237  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:11.651240  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:11.654428  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:12.150531  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:12.150555  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:12.150563  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:12.150568  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:12.154577  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:12.155213  121308 node_ready.go:53] node "ha-503856-m02" has status "Ready":"False"
	I0819 11:31:12.650918  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:12.650944  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:12.650954  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:12.650959  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:12.654346  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:13.151261  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:13.151283  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:13.151291  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:13.151296  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:13.154547  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:13.650503  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:13.650528  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:13.650540  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:13.650553  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:13.653918  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:14.150682  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:14.150705  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:14.150713  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:14.150717  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:14.154249  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:14.651367  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:14.651392  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:14.651401  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:14.651407  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:14.654690  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:14.655212  121308 node_ready.go:53] node "ha-503856-m02" has status "Ready":"False"
	I0819 11:31:15.150779  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:15.150803  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:15.150812  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:15.150818  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:15.154018  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:15.651095  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:15.651120  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:15.651128  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:15.651132  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:15.654842  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:16.150783  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:16.150813  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.150824  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.150831  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.154407  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:16.154962  121308 node_ready.go:49] node "ha-503856-m02" has status "Ready":"True"
	I0819 11:31:16.154985  121308 node_ready.go:38] duration metric: took 17.004735248s for node "ha-503856-m02" to be "Ready" ...
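	For reference: node_ready polls GET /api/v1/nodes/ha-503856-m02 roughly every 500ms until the node's Ready condition turns True, which took about 17s here (kubelet reports NotReady until its container network is ready). The same wait expressed directly with kubectl (equivalent sketch):
	  kubectl wait --for=condition=Ready node/ha-503856-m02 --timeout=6m
	  kubectl get node ha-503856-m02 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # True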
	I0819 11:31:16.154996  121308 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 11:31:16.155096  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0819 11:31:16.155107  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.155122  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.155129  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.158937  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:16.165497  121308 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2jdlw" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:16.165598  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-2jdlw
	I0819 11:31:16.165607  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.165615  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.165620  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.168516  121308 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 11:31:16.169237  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:31:16.169254  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.169263  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.169270  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.171806  121308 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 11:31:16.172383  121308 pod_ready.go:93] pod "coredns-6f6b679f8f-2jdlw" in "kube-system" namespace has status "Ready":"True"
	I0819 11:31:16.172403  121308 pod_ready.go:82] duration metric: took 6.87663ms for pod "coredns-6f6b679f8f-2jdlw" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:16.172413  121308 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-5dbrz" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:16.172469  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-5dbrz
	I0819 11:31:16.172477  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.172484  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.172489  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.174739  121308 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 11:31:16.175483  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:31:16.175502  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.175510  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.175518  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.179447  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:16.180253  121308 pod_ready.go:93] pod "coredns-6f6b679f8f-5dbrz" in "kube-system" namespace has status "Ready":"True"
	I0819 11:31:16.180285  121308 pod_ready.go:82] duration metric: took 7.864672ms for pod "coredns-6f6b679f8f-5dbrz" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:16.180308  121308 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:16.180389  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-503856
	I0819 11:31:16.180400  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.180410  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.180419  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.182976  121308 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 11:31:16.183903  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:31:16.183922  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.183933  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.183942  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.186985  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:16.187785  121308 pod_ready.go:93] pod "etcd-ha-503856" in "kube-system" namespace has status "Ready":"True"
	I0819 11:31:16.187805  121308 pod_ready.go:82] duration metric: took 7.485597ms for pod "etcd-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:16.187819  121308 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:16.187888  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-503856-m02
	I0819 11:31:16.187898  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.187910  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.187917  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.192410  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:31:16.193005  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:16.193023  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.193031  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.193034  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.196077  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:16.196798  121308 pod_ready.go:93] pod "etcd-ha-503856-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 11:31:16.196825  121308 pod_ready.go:82] duration metric: took 8.996105ms for pod "etcd-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:16.196847  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:16.351291  121308 request.go:632] Waited for 154.366111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-503856
	I0819 11:31:16.351404  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-503856
	I0819 11:31:16.351416  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.351430  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.351437  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.354928  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:16.550888  121308 request.go:632] Waited for 195.31773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:31:16.550974  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:31:16.550979  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.550987  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.550993  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.554225  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:16.554734  121308 pod_ready.go:93] pod "kube-apiserver-ha-503856" in "kube-system" namespace has status "Ready":"True"
	I0819 11:31:16.554759  121308 pod_ready.go:82] duration metric: took 357.898517ms for pod "kube-apiserver-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:16.554772  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:16.750863  121308 request.go:632] Waited for 195.998191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-503856-m02
	I0819 11:31:16.750924  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-503856-m02
	I0819 11:31:16.750930  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.750944  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.750948  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.754344  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:16.951335  121308 request.go:632] Waited for 196.35381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:16.951399  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:16.951404  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.951412  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.951416  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.954688  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:16.955119  121308 pod_ready.go:93] pod "kube-apiserver-ha-503856-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 11:31:16.955144  121308 pod_ready.go:82] duration metric: took 400.364836ms for pod "kube-apiserver-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:16.955154  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:17.151352  121308 request.go:632] Waited for 196.120337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-503856
	I0819 11:31:17.151448  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-503856
	I0819 11:31:17.151455  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:17.151466  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:17.151474  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:17.160599  121308 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0819 11:31:17.351367  121308 request.go:632] Waited for 190.030222ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:31:17.351446  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:31:17.351452  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:17.351460  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:17.351463  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:17.354601  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:17.355202  121308 pod_ready.go:93] pod "kube-controller-manager-ha-503856" in "kube-system" namespace has status "Ready":"True"
	I0819 11:31:17.355226  121308 pod_ready.go:82] duration metric: took 400.064759ms for pod "kube-controller-manager-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:17.355241  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:17.551783  121308 request.go:632] Waited for 196.422792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-503856-m02
	I0819 11:31:17.551843  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-503856-m02
	I0819 11:31:17.551849  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:17.551856  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:17.551860  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:17.555327  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:17.751515  121308 request.go:632] Waited for 195.387334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:17.751591  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:17.751599  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:17.751609  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:17.751615  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:17.755043  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:17.755640  121308 pod_ready.go:93] pod "kube-controller-manager-ha-503856-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 11:31:17.755665  121308 pod_ready.go:82] duration metric: took 400.408914ms for pod "kube-controller-manager-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:17.755678  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-d6zw9" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:17.951776  121308 request.go:632] Waited for 195.987415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d6zw9
	I0819 11:31:17.951841  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d6zw9
	I0819 11:31:17.951846  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:17.951854  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:17.951858  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:17.955056  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:18.151229  121308 request.go:632] Waited for 195.547001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:31:18.151317  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:31:18.151324  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:18.151334  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:18.151341  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:18.154145  121308 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 11:31:18.154647  121308 pod_ready.go:93] pod "kube-proxy-d6zw9" in "kube-system" namespace has status "Ready":"True"
	I0819 11:31:18.154667  121308 pod_ready.go:82] duration metric: took 398.981566ms for pod "kube-proxy-d6zw9" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:18.154677  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j2f6h" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:18.350839  121308 request.go:632] Waited for 196.063612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j2f6h
	I0819 11:31:18.350909  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j2f6h
	I0819 11:31:18.350914  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:18.350922  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:18.350927  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:18.354241  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:18.551204  121308 request.go:632] Waited for 196.370534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:18.551264  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:18.551269  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:18.551278  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:18.551282  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:18.554393  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:18.554898  121308 pod_ready.go:93] pod "kube-proxy-j2f6h" in "kube-system" namespace has status "Ready":"True"
	I0819 11:31:18.554920  121308 pod_ready.go:82] duration metric: took 400.236586ms for pod "kube-proxy-j2f6h" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:18.554934  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:18.751810  121308 request.go:632] Waited for 196.801696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-503856
	I0819 11:31:18.751869  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-503856
	I0819 11:31:18.751874  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:18.751882  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:18.751888  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:18.755305  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:18.951310  121308 request.go:632] Waited for 195.40754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:31:18.951382  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:31:18.951388  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:18.951395  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:18.951401  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:18.954645  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:18.955169  121308 pod_ready.go:93] pod "kube-scheduler-ha-503856" in "kube-system" namespace has status "Ready":"True"
	I0819 11:31:18.955187  121308 pod_ready.go:82] duration metric: took 400.245984ms for pod "kube-scheduler-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:18.955199  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:19.151310  121308 request.go:632] Waited for 196.038831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-503856-m02
	I0819 11:31:19.151387  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-503856-m02
	I0819 11:31:19.151395  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:19.151403  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:19.151406  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:19.154591  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:19.351614  121308 request.go:632] Waited for 196.434555ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:19.351693  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:19.351699  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:19.351706  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:19.351709  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:19.354955  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:19.355610  121308 pod_ready.go:93] pod "kube-scheduler-ha-503856-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 11:31:19.355629  121308 pod_ready.go:82] duration metric: took 400.422835ms for pod "kube-scheduler-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:19.355640  121308 pod_ready.go:39] duration metric: took 3.200617934s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
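
For orientation: every readiness check in the loop above is a pair of GETs (one for the pod, one for its node), and client-go's client-side throttle is what inserts the ~196ms "Waited for ..." gaps between them. Below is a minimal client-go sketch of the same per-pod Ready check; the kubeconfig path, pod name, and polling interval are placeholders, not minikube's actual wiring.

// podready_sketch.go - a minimal sketch (assumed names/paths) of polling a pod's
// Ready condition one GET at a time, as the log above does.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-ha-503856", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(400 * time.Millisecond) // roughly the cadence visible in the log
	}
}
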
	I0819 11:31:19.355656  121308 api_server.go:52] waiting for apiserver process to appear ...
	I0819 11:31:19.355710  121308 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:31:19.369644  121308 api_server.go:72] duration metric: took 20.501314219s to wait for apiserver process to appear ...
	I0819 11:31:19.369681  121308 api_server.go:88] waiting for apiserver healthz status ...
	I0819 11:31:19.369706  121308 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I0819 11:31:19.374147  121308 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I0819 11:31:19.374237  121308 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I0819 11:31:19.374249  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:19.374260  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:19.374266  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:19.375027  121308 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 11:31:19.375130  121308 api_server.go:141] control plane version: v1.31.0
	I0819 11:31:19.375149  121308 api_server.go:131] duration metric: took 5.461132ms to wait for apiserver health ...
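
The health gate above is just an HTTPS GET against /healthz followed by /version on the same endpoint; anything other than a 200 "ok" keeps the wait loop going. A rough stdlib illustration (not minikube's code; the insecure TLS setting only keeps the sketch self-contained, minikube trusts the cluster CA):

// healthz_sketch.go - a rough illustration of the apiserver healthz probe logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
	}
	resp, err := client.Get("https://192.168.39.102:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
}
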
	I0819 11:31:19.375157  121308 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 11:31:19.551540  121308 request.go:632] Waited for 176.300465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0819 11:31:19.551635  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0819 11:31:19.551643  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:19.551650  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:19.551655  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:19.556172  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:31:19.562111  121308 system_pods.go:59] 17 kube-system pods found
	I0819 11:31:19.562148  121308 system_pods.go:61] "coredns-6f6b679f8f-2jdlw" [ad8206ac-dc67-4fcb-a4ad-a431f3b0b7cd] Running
	I0819 11:31:19.562153  121308 system_pods.go:61] "coredns-6f6b679f8f-5dbrz" [5530828e-1061-434c-ad2f-80847f3ab171] Running
	I0819 11:31:19.562157  121308 system_pods.go:61] "etcd-ha-503856" [b8932b07-bc71-4d14-bc4c-a323aa900891] Running
	I0819 11:31:19.562160  121308 system_pods.go:61] "etcd-ha-503856-m02" [7c495867-e51d-4100-b0d8-2794e45a18c4] Running
	I0819 11:31:19.562163  121308 system_pods.go:61] "kindnet-rnjwj" [1a6e4b0d-f3f2-45e3-b66e-b0457ba61723] Running
	I0819 11:31:19.562166  121308 system_pods.go:61] "kindnet-st2mx" [99e7c93b-40a9-4902-b1a5-5a6bcc55735c] Running
	I0819 11:31:19.562169  121308 system_pods.go:61] "kube-apiserver-ha-503856" [bdea9580-2d12-4e91-acbd-5a5e08f5637c] Running
	I0819 11:31:19.562172  121308 system_pods.go:61] "kube-apiserver-ha-503856-m02" [a1d5950d-50bc-42e8-b432-27425aa4b80d] Running
	I0819 11:31:19.562175  121308 system_pods.go:61] "kube-controller-manager-ha-503856" [36c9c0c5-0b9e-4fce-a34f-bf1c21590af4] Running
	I0819 11:31:19.562179  121308 system_pods.go:61] "kube-controller-manager-ha-503856-m02" [a58cf93b-47a4-4cb7-80e1-afb525b1a2b2] Running
	I0819 11:31:19.562182  121308 system_pods.go:61] "kube-proxy-d6zw9" [f8054009-c06a-4ccc-b6c4-22e0f6bb632a] Running
	I0819 11:31:19.562184  121308 system_pods.go:61] "kube-proxy-j2f6h" [e9623c18-7b96-49b5-8cc6-6ea700eec47e] Running
	I0819 11:31:19.562187  121308 system_pods.go:61] "kube-scheduler-ha-503856" [2c8c7e78-ded0-47ff-8720-b1c36c9123c6] Running
	I0819 11:31:19.562190  121308 system_pods.go:61] "kube-scheduler-ha-503856-m02" [6f51735c-0f3e-49f8-aff7-c6c485e0e653] Running
	I0819 11:31:19.562193  121308 system_pods.go:61] "kube-vip-ha-503856" [a184b6bf-9e5f-40a1-a3f8-5b97ce4cd6b8] Running
	I0819 11:31:19.562197  121308 system_pods.go:61] "kube-vip-ha-503856-m02" [5d66ea23-6878-403f-88df-94bf42ad5800] Running
	I0819 11:31:19.562200  121308 system_pods.go:61] "storage-provisioner" [4c212413-ac90-45fb-92de-bfd9e9115540] Running
	I0819 11:31:19.562206  121308 system_pods.go:74] duration metric: took 187.040317ms to wait for pod list to return data ...
	I0819 11:31:19.562216  121308 default_sa.go:34] waiting for default service account to be created ...
	I0819 11:31:19.751666  121308 request.go:632] Waited for 189.372257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I0819 11:31:19.751738  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I0819 11:31:19.751744  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:19.751752  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:19.751757  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:19.755027  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:19.755270  121308 default_sa.go:45] found service account: "default"
	I0819 11:31:19.755290  121308 default_sa.go:55] duration metric: took 193.066823ms for default service account to be created ...
	I0819 11:31:19.755300  121308 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 11:31:19.951772  121308 request.go:632] Waited for 196.382531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0819 11:31:19.951856  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0819 11:31:19.951861  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:19.951872  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:19.951875  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:19.956631  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:31:19.962576  121308 system_pods.go:86] 17 kube-system pods found
	I0819 11:31:19.962609  121308 system_pods.go:89] "coredns-6f6b679f8f-2jdlw" [ad8206ac-dc67-4fcb-a4ad-a431f3b0b7cd] Running
	I0819 11:31:19.962615  121308 system_pods.go:89] "coredns-6f6b679f8f-5dbrz" [5530828e-1061-434c-ad2f-80847f3ab171] Running
	I0819 11:31:19.962619  121308 system_pods.go:89] "etcd-ha-503856" [b8932b07-bc71-4d14-bc4c-a323aa900891] Running
	I0819 11:31:19.962623  121308 system_pods.go:89] "etcd-ha-503856-m02" [7c495867-e51d-4100-b0d8-2794e45a18c4] Running
	I0819 11:31:19.962627  121308 system_pods.go:89] "kindnet-rnjwj" [1a6e4b0d-f3f2-45e3-b66e-b0457ba61723] Running
	I0819 11:31:19.962630  121308 system_pods.go:89] "kindnet-st2mx" [99e7c93b-40a9-4902-b1a5-5a6bcc55735c] Running
	I0819 11:31:19.962634  121308 system_pods.go:89] "kube-apiserver-ha-503856" [bdea9580-2d12-4e91-acbd-5a5e08f5637c] Running
	I0819 11:31:19.962637  121308 system_pods.go:89] "kube-apiserver-ha-503856-m02" [a1d5950d-50bc-42e8-b432-27425aa4b80d] Running
	I0819 11:31:19.962641  121308 system_pods.go:89] "kube-controller-manager-ha-503856" [36c9c0c5-0b9e-4fce-a34f-bf1c21590af4] Running
	I0819 11:31:19.962644  121308 system_pods.go:89] "kube-controller-manager-ha-503856-m02" [a58cf93b-47a4-4cb7-80e1-afb525b1a2b2] Running
	I0819 11:31:19.962647  121308 system_pods.go:89] "kube-proxy-d6zw9" [f8054009-c06a-4ccc-b6c4-22e0f6bb632a] Running
	I0819 11:31:19.962650  121308 system_pods.go:89] "kube-proxy-j2f6h" [e9623c18-7b96-49b5-8cc6-6ea700eec47e] Running
	I0819 11:31:19.962653  121308 system_pods.go:89] "kube-scheduler-ha-503856" [2c8c7e78-ded0-47ff-8720-b1c36c9123c6] Running
	I0819 11:31:19.962655  121308 system_pods.go:89] "kube-scheduler-ha-503856-m02" [6f51735c-0f3e-49f8-aff7-c6c485e0e653] Running
	I0819 11:31:19.962658  121308 system_pods.go:89] "kube-vip-ha-503856" [a184b6bf-9e5f-40a1-a3f8-5b97ce4cd6b8] Running
	I0819 11:31:19.962661  121308 system_pods.go:89] "kube-vip-ha-503856-m02" [5d66ea23-6878-403f-88df-94bf42ad5800] Running
	I0819 11:31:19.962664  121308 system_pods.go:89] "storage-provisioner" [4c212413-ac90-45fb-92de-bfd9e9115540] Running
	I0819 11:31:19.962669  121308 system_pods.go:126] duration metric: took 207.363242ms to wait for k8s-apps to be running ...
	I0819 11:31:19.962677  121308 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 11:31:19.962731  121308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:31:19.976870  121308 system_svc.go:56] duration metric: took 14.172102ms WaitForService to wait for kubelet
	I0819 11:31:19.976907  121308 kubeadm.go:582] duration metric: took 21.1085779s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:31:19.976927  121308 node_conditions.go:102] verifying NodePressure condition ...
	I0819 11:31:20.150814  121308 request.go:632] Waited for 173.793312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I0819 11:31:20.150901  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I0819 11:31:20.150909  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:20.150921  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:20.150933  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:20.155101  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:31:20.156014  121308 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 11:31:20.156041  121308 node_conditions.go:123] node cpu capacity is 2
	I0819 11:31:20.156052  121308 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 11:31:20.156057  121308 node_conditions.go:123] node cpu capacity is 2
	I0819 11:31:20.156061  121308 node_conditions.go:105] duration metric: took 179.129515ms to run NodePressure ...
	I0819 11:31:20.156076  121308 start.go:241] waiting for startup goroutines ...
	I0819 11:31:20.156103  121308 start.go:255] writing updated cluster config ...
	I0819 11:31:20.157909  121308 out.go:201] 
	I0819 11:31:20.159418  121308 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:31:20.159527  121308 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/config.json ...
	I0819 11:31:20.161292  121308 out.go:177] * Starting "ha-503856-m03" control-plane node in "ha-503856" cluster
	I0819 11:31:20.162693  121308 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 11:31:20.162731  121308 cache.go:56] Caching tarball of preloaded images
	I0819 11:31:20.162861  121308 preload.go:172] Found /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 11:31:20.162873  121308 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 11:31:20.162976  121308 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/config.json ...
	I0819 11:31:20.163171  121308 start.go:360] acquireMachinesLock for ha-503856-m03: {Name:mk79269fbbd6912302c8df8ddb039d4e2f1a0790 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:31:20.163214  121308 start.go:364] duration metric: took 22.017µs to acquireMachinesLock for "ha-503856-m03"
	I0819 11:31:20.163233  121308 start.go:93] Provisioning new machine with config: &{Name:ha-503856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 11:31:20.163331  121308 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0819 11:31:20.165351  121308 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:31:20.165454  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:31:20.165502  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:31:20.181094  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42713
	I0819 11:31:20.181520  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:31:20.182029  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:31:20.182048  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:31:20.182422  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:31:20.182743  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetMachineName
	I0819 11:31:20.183067  121308 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:31:20.183273  121308 start.go:159] libmachine.API.Create for "ha-503856" (driver="kvm2")
	I0819 11:31:20.183308  121308 client.go:168] LocalClient.Create starting
	I0819 11:31:20.183352  121308 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem
	I0819 11:31:20.183401  121308 main.go:141] libmachine: Decoding PEM data...
	I0819 11:31:20.183423  121308 main.go:141] libmachine: Parsing certificate...
	I0819 11:31:20.183489  121308 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem
	I0819 11:31:20.183533  121308 main.go:141] libmachine: Decoding PEM data...
	I0819 11:31:20.183550  121308 main.go:141] libmachine: Parsing certificate...
	I0819 11:31:20.183577  121308 main.go:141] libmachine: Running pre-create checks...
	I0819 11:31:20.183590  121308 main.go:141] libmachine: (ha-503856-m03) Calling .PreCreateCheck
	I0819 11:31:20.183792  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetConfigRaw
	I0819 11:31:20.184304  121308 main.go:141] libmachine: Creating machine...
	I0819 11:31:20.184324  121308 main.go:141] libmachine: (ha-503856-m03) Calling .Create
	I0819 11:31:20.184512  121308 main.go:141] libmachine: (ha-503856-m03) Creating KVM machine...
	I0819 11:31:20.185960  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found existing default KVM network
	I0819 11:31:20.186120  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found existing private KVM network mk-ha-503856
	I0819 11:31:20.186273  121308 main.go:141] libmachine: (ha-503856-m03) Setting up store path in /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03 ...
	I0819 11:31:20.186298  121308 main.go:141] libmachine: (ha-503856-m03) Building disk image from file:///home/jenkins/minikube-integration/19476-99410/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 11:31:20.186377  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:20.186275  122066 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 11:31:20.186508  121308 main.go:141] libmachine: (ha-503856-m03) Downloading /home/jenkins/minikube-integration/19476-99410/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19476-99410/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 11:31:20.443661  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:20.443500  122066 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/id_rsa...
	I0819 11:31:20.771388  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:20.771264  122066 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/ha-503856-m03.rawdisk...
	I0819 11:31:20.771422  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Writing magic tar header
	I0819 11:31:20.771436  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Writing SSH key tar header
	I0819 11:31:20.771447  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:20.771396  122066 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03 ...
	I0819 11:31:20.771572  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03
	I0819 11:31:20.771599  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube/machines
	I0819 11:31:20.771617  121308 main.go:141] libmachine: (ha-503856-m03) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03 (perms=drwx------)
	I0819 11:31:20.771632  121308 main.go:141] libmachine: (ha-503856-m03) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube/machines (perms=drwxr-xr-x)
	I0819 11:31:20.771646  121308 main.go:141] libmachine: (ha-503856-m03) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube (perms=drwxr-xr-x)
	I0819 11:31:20.771660  121308 main.go:141] libmachine: (ha-503856-m03) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410 (perms=drwxrwxr-x)
	I0819 11:31:20.771670  121308 main.go:141] libmachine: (ha-503856-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 11:31:20.771682  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 11:31:20.771697  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410
	I0819 11:31:20.771706  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 11:31:20.771715  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Checking permissions on dir: /home/jenkins
	I0819 11:31:20.771746  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Checking permissions on dir: /home
	I0819 11:31:20.771764  121308 main.go:141] libmachine: (ha-503856-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 11:31:20.771775  121308 main.go:141] libmachine: (ha-503856-m03) Creating domain...
	I0819 11:31:20.771788  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Skipping /home - not owner
	I0819 11:31:20.772777  121308 main.go:141] libmachine: (ha-503856-m03) define libvirt domain using xml: 
	I0819 11:31:20.772803  121308 main.go:141] libmachine: (ha-503856-m03) <domain type='kvm'>
	I0819 11:31:20.772813  121308 main.go:141] libmachine: (ha-503856-m03)   <name>ha-503856-m03</name>
	I0819 11:31:20.772821  121308 main.go:141] libmachine: (ha-503856-m03)   <memory unit='MiB'>2200</memory>
	I0819 11:31:20.772833  121308 main.go:141] libmachine: (ha-503856-m03)   <vcpu>2</vcpu>
	I0819 11:31:20.772846  121308 main.go:141] libmachine: (ha-503856-m03)   <features>
	I0819 11:31:20.772862  121308 main.go:141] libmachine: (ha-503856-m03)     <acpi/>
	I0819 11:31:20.772875  121308 main.go:141] libmachine: (ha-503856-m03)     <apic/>
	I0819 11:31:20.772915  121308 main.go:141] libmachine: (ha-503856-m03)     <pae/>
	I0819 11:31:20.772943  121308 main.go:141] libmachine: (ha-503856-m03)     
	I0819 11:31:20.772955  121308 main.go:141] libmachine: (ha-503856-m03)   </features>
	I0819 11:31:20.772964  121308 main.go:141] libmachine: (ha-503856-m03)   <cpu mode='host-passthrough'>
	I0819 11:31:20.772974  121308 main.go:141] libmachine: (ha-503856-m03)   
	I0819 11:31:20.772984  121308 main.go:141] libmachine: (ha-503856-m03)   </cpu>
	I0819 11:31:20.772993  121308 main.go:141] libmachine: (ha-503856-m03)   <os>
	I0819 11:31:20.773003  121308 main.go:141] libmachine: (ha-503856-m03)     <type>hvm</type>
	I0819 11:31:20.773011  121308 main.go:141] libmachine: (ha-503856-m03)     <boot dev='cdrom'/>
	I0819 11:31:20.773025  121308 main.go:141] libmachine: (ha-503856-m03)     <boot dev='hd'/>
	I0819 11:31:20.773039  121308 main.go:141] libmachine: (ha-503856-m03)     <bootmenu enable='no'/>
	I0819 11:31:20.773049  121308 main.go:141] libmachine: (ha-503856-m03)   </os>
	I0819 11:31:20.773070  121308 main.go:141] libmachine: (ha-503856-m03)   <devices>
	I0819 11:31:20.773083  121308 main.go:141] libmachine: (ha-503856-m03)     <disk type='file' device='cdrom'>
	I0819 11:31:20.773119  121308 main.go:141] libmachine: (ha-503856-m03)       <source file='/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/boot2docker.iso'/>
	I0819 11:31:20.773145  121308 main.go:141] libmachine: (ha-503856-m03)       <target dev='hdc' bus='scsi'/>
	I0819 11:31:20.773172  121308 main.go:141] libmachine: (ha-503856-m03)       <readonly/>
	I0819 11:31:20.773197  121308 main.go:141] libmachine: (ha-503856-m03)     </disk>
	I0819 11:31:20.773212  121308 main.go:141] libmachine: (ha-503856-m03)     <disk type='file' device='disk'>
	I0819 11:31:20.773222  121308 main.go:141] libmachine: (ha-503856-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 11:31:20.773238  121308 main.go:141] libmachine: (ha-503856-m03)       <source file='/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/ha-503856-m03.rawdisk'/>
	I0819 11:31:20.773248  121308 main.go:141] libmachine: (ha-503856-m03)       <target dev='hda' bus='virtio'/>
	I0819 11:31:20.773254  121308 main.go:141] libmachine: (ha-503856-m03)     </disk>
	I0819 11:31:20.773261  121308 main.go:141] libmachine: (ha-503856-m03)     <interface type='network'>
	I0819 11:31:20.773269  121308 main.go:141] libmachine: (ha-503856-m03)       <source network='mk-ha-503856'/>
	I0819 11:31:20.773285  121308 main.go:141] libmachine: (ha-503856-m03)       <model type='virtio'/>
	I0819 11:31:20.773295  121308 main.go:141] libmachine: (ha-503856-m03)     </interface>
	I0819 11:31:20.773305  121308 main.go:141] libmachine: (ha-503856-m03)     <interface type='network'>
	I0819 11:31:20.773315  121308 main.go:141] libmachine: (ha-503856-m03)       <source network='default'/>
	I0819 11:31:20.773326  121308 main.go:141] libmachine: (ha-503856-m03)       <model type='virtio'/>
	I0819 11:31:20.773334  121308 main.go:141] libmachine: (ha-503856-m03)     </interface>
	I0819 11:31:20.773344  121308 main.go:141] libmachine: (ha-503856-m03)     <serial type='pty'>
	I0819 11:31:20.773353  121308 main.go:141] libmachine: (ha-503856-m03)       <target port='0'/>
	I0819 11:31:20.773364  121308 main.go:141] libmachine: (ha-503856-m03)     </serial>
	I0819 11:31:20.773380  121308 main.go:141] libmachine: (ha-503856-m03)     <console type='pty'>
	I0819 11:31:20.773388  121308 main.go:141] libmachine: (ha-503856-m03)       <target type='serial' port='0'/>
	I0819 11:31:20.773396  121308 main.go:141] libmachine: (ha-503856-m03)     </console>
	I0819 11:31:20.773404  121308 main.go:141] libmachine: (ha-503856-m03)     <rng model='virtio'>
	I0819 11:31:20.773418  121308 main.go:141] libmachine: (ha-503856-m03)       <backend model='random'>/dev/random</backend>
	I0819 11:31:20.773424  121308 main.go:141] libmachine: (ha-503856-m03)     </rng>
	I0819 11:31:20.773431  121308 main.go:141] libmachine: (ha-503856-m03)     
	I0819 11:31:20.773441  121308 main.go:141] libmachine: (ha-503856-m03)     
	I0819 11:31:20.773450  121308 main.go:141] libmachine: (ha-503856-m03)   </devices>
	I0819 11:31:20.773460  121308 main.go:141] libmachine: (ha-503856-m03) </domain>
	I0819 11:31:20.773472  121308 main.go:141] libmachine: (ha-503856-m03) 
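
The block above is the kvm2 driver echoing, line by line, the libvirt <domain> XML it is about to define for ha-503856-m03 (CPUs, memory, the boot2docker ISO as cdrom, the raw disk, and two virtio NICs on the default and mk-ha-503856 networks). The sketch below renders a similar definition with text/template; the struct and field names are assumptions for illustration, not the driver's types.

// domainxml_sketch.go - a minimal sketch of rendering a libvirt <domain> definition
// like the one echoed above. Field names and paths are assumptions.
package main

import (
	"os"
	"text/template"
)

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type vm struct {
	Name      string
	MemoryMiB int
	CPUs      int
	DiskPath  string
	Network   string
}

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	_ = t.Execute(os.Stdout, vm{
		Name:      "ha-503856-m03",
		MemoryMiB: 2200,
		CPUs:      2,
		DiskPath:  "/path/to/ha-503856-m03.rawdisk", // placeholder
		Network:   "mk-ha-503856",
	})
}
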
	I0819 11:31:20.780669  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:61:14:39 in network default
	I0819 11:31:20.781385  121308 main.go:141] libmachine: (ha-503856-m03) Ensuring networks are active...
	I0819 11:31:20.781407  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:20.782170  121308 main.go:141] libmachine: (ha-503856-m03) Ensuring network default is active
	I0819 11:31:20.782550  121308 main.go:141] libmachine: (ha-503856-m03) Ensuring network mk-ha-503856 is active
	I0819 11:31:20.782945  121308 main.go:141] libmachine: (ha-503856-m03) Getting domain xml...
	I0819 11:31:20.783585  121308 main.go:141] libmachine: (ha-503856-m03) Creating domain...
	I0819 11:31:22.039720  121308 main.go:141] libmachine: (ha-503856-m03) Waiting to get IP...
	I0819 11:31:22.040528  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:22.040945  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:22.040966  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:22.040924  122066 retry.go:31] will retry after 197.841944ms: waiting for machine to come up
	I0819 11:31:22.241064  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:22.241577  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:22.241600  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:22.241539  122066 retry.go:31] will retry after 324.078324ms: waiting for machine to come up
	I0819 11:31:22.566780  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:22.567224  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:22.567256  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:22.567172  122066 retry.go:31] will retry after 402.796459ms: waiting for machine to come up
	I0819 11:31:22.971719  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:22.972183  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:22.972213  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:22.972138  122066 retry.go:31] will retry after 566.878257ms: waiting for machine to come up
	I0819 11:31:23.541156  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:23.541766  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:23.541790  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:23.541688  122066 retry.go:31] will retry after 628.56629ms: waiting for machine to come up
	I0819 11:31:24.171757  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:24.172252  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:24.172277  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:24.172176  122066 retry.go:31] will retry after 885.590988ms: waiting for machine to come up
	I0819 11:31:25.059781  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:25.060341  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:25.060380  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:25.060286  122066 retry.go:31] will retry after 741.397234ms: waiting for machine to come up
	I0819 11:31:25.803145  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:25.803550  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:25.803590  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:25.803518  122066 retry.go:31] will retry after 991.895752ms: waiting for machine to come up
	I0819 11:31:26.796731  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:26.797190  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:26.797212  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:26.797153  122066 retry.go:31] will retry after 1.506964408s: waiting for machine to come up
	I0819 11:31:28.305505  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:28.305948  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:28.305985  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:28.305898  122066 retry.go:31] will retry after 1.478403756s: waiting for machine to come up
	I0819 11:31:29.785666  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:29.786262  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:29.786298  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:29.786206  122066 retry.go:31] will retry after 2.112030077s: waiting for machine to come up
	I0819 11:31:31.900436  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:31.900863  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:31.900891  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:31.900814  122066 retry.go:31] will retry after 3.559996961s: waiting for machine to come up
	I0819 11:31:35.462660  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:35.463208  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:35.463235  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:35.463133  122066 retry.go:31] will retry after 4.366334624s: waiting for machine to come up
	I0819 11:31:39.834601  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:39.835050  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:39.835081  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:39.835002  122066 retry.go:31] will retry after 3.604040354s: waiting for machine to come up
	I0819 11:31:43.440818  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:43.441291  121308 main.go:141] libmachine: (ha-503856-m03) Found IP for machine: 192.168.39.122
	I0819 11:31:43.441316  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has current primary IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
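
The "will retry after ..." lines above show the driver polling libvirt's DHCP leases for the new VM's MAC address, sleeping a little longer (with jitter) after each miss until the guest picks up an IP. Below is a generic sketch of that grow-the-delay pattern; lookupIP is a stand-in (assumption) for the actual lease query.

// waitforip_sketch.go - a generic sketch of the retry-with-growing-delay pattern
// seen in the "will retry after ..." lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP is a placeholder: a real driver would ask the network's DHCP leases
// for the address bound to this MAC.
func lookupIP(mac string) (string, error) {
	return "", errNoLease
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		// Add jitter and grow the delay, mirroring the 197ms, 324ms, 402ms, ... steps above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	_, err := waitForIP("52:54:00:10:1f:39", 3*time.Second)
	fmt.Println(err)
}
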
	I0819 11:31:43.441322  121308 main.go:141] libmachine: (ha-503856-m03) Reserving static IP address...
	I0819 11:31:43.441667  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find host DHCP lease matching {name: "ha-503856-m03", mac: "52:54:00:10:1f:39", ip: "192.168.39.122"} in network mk-ha-503856
	I0819 11:31:43.521399  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Getting to WaitForSSH function...
	I0819 11:31:43.521430  121308 main.go:141] libmachine: (ha-503856-m03) Reserved static IP address: 192.168.39.122
	I0819 11:31:43.521441  121308 main.go:141] libmachine: (ha-503856-m03) Waiting for SSH to be available...
	I0819 11:31:43.524277  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:43.524679  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:minikube Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:43.524710  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:43.524833  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Using SSH client type: external
	I0819 11:31:43.524859  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/id_rsa (-rw-------)
	I0819 11:31:43.524915  121308 main.go:141] libmachine: (ha-503856-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.122 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 11:31:43.524934  121308 main.go:141] libmachine: (ha-503856-m03) DBG | About to run SSH command:
	I0819 11:31:43.524948  121308 main.go:141] libmachine: (ha-503856-m03) DBG | exit 0
	I0819 11:31:43.647763  121308 main.go:141] libmachine: (ha-503856-m03) DBG | SSH cmd err, output: <nil>: 
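
WaitForSSH above shells out to the system ssh client with the options echoed in the DBG line and treats a successful `exit 0` as proof that the guest's sshd is reachable. A rough sketch of that probe follows; the key path and retry budget are placeholders, not minikube's actual values.

// waitforssh_sketch.go - a rough sketch of probing SSH availability by shelling
// out to the ssh client with "exit 0", as in the DBG lines above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(ip, keyPath string) bool {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + ip,
		"exit 0",
	}
	// A zero exit status means the guest accepted the connection and ran the command.
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	for i := 0; i < 30; i++ {
		if sshReady("192.168.39.122", "/path/to/id_rsa") { // placeholder key path
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
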
	I0819 11:31:43.648038  121308 main.go:141] libmachine: (ha-503856-m03) KVM machine creation complete!
	I0819 11:31:43.648355  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetConfigRaw
	I0819 11:31:43.648912  121308 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:31:43.649105  121308 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:31:43.649255  121308 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 11:31:43.649270  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetState
	I0819 11:31:43.650382  121308 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 11:31:43.650395  121308 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 11:31:43.650401  121308 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 11:31:43.650407  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:31:43.652705  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:43.653134  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:43.653162  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:43.653304  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:31:43.653501  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:43.653653  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:43.653797  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:31:43.654047  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:31:43.654282  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0819 11:31:43.654292  121308 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 11:31:43.754854  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 11:31:43.754882  121308 main.go:141] libmachine: Detecting the provisioner...
	I0819 11:31:43.754917  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:31:43.757738  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:43.758200  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:43.758232  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:43.758445  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:31:43.758674  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:43.758866  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:43.759011  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:31:43.759163  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:31:43.759354  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0819 11:31:43.759368  121308 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 11:31:43.860452  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 11:31:43.860549  121308 main.go:141] libmachine: found compatible host: buildroot
	I0819 11:31:43.860559  121308 main.go:141] libmachine: Provisioning with buildroot...
	I0819 11:31:43.860567  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetMachineName
	I0819 11:31:43.860864  121308 buildroot.go:166] provisioning hostname "ha-503856-m03"
	I0819 11:31:43.860889  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetMachineName
	I0819 11:31:43.861094  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:31:43.863700  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:43.864053  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:43.864088  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:43.864221  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:31:43.864400  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:43.864595  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:43.864699  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:31:43.864833  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:31:43.865008  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0819 11:31:43.865023  121308 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-503856-m03 && echo "ha-503856-m03" | sudo tee /etc/hostname
	I0819 11:31:43.983047  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-503856-m03
	
	I0819 11:31:43.983077  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:31:43.985980  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:43.986316  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:43.986342  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:43.986545  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:31:43.986757  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:43.986901  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:43.987003  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:31:43.987127  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:31:43.987343  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0819 11:31:43.987363  121308 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-503856-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-503856-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-503856-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 11:31:44.096697  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: 
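
The shell fragment above is the provisioner's idempotent /etc/hosts edit: if the new hostname already resolves it does nothing, otherwise it rewrites an existing 127.0.1.1 line or appends one. The same logic expressed in Go against a local copy of the file (an illustration only; minikube runs the shell remotely over SSH):

// sethostname_sketch.go - an illustration of the idempotent /etc/hosts edit the
// provisioner runs remotely (shell echoed above), done here against the local file.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func ensureHostsEntry(hosts, name string) string {
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts // hostname already resolvable, nothing to do
	}
	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loop.MatchString(hosts) {
		return loop.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(ensureHostsEntry(string(data), "ha-503856-m03"))
}
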
	I0819 11:31:44.096762  121308 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19476-99410/.minikube CaCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19476-99410/.minikube}
	I0819 11:31:44.096786  121308 buildroot.go:174] setting up certificates
	I0819 11:31:44.096797  121308 provision.go:84] configureAuth start
	I0819 11:31:44.096811  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetMachineName
	I0819 11:31:44.097152  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetIP
	I0819 11:31:44.099996  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.100366  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:44.100393  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.100542  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:31:44.102766  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.103244  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:44.103271  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.103409  121308 provision.go:143] copyHostCerts
	I0819 11:31:44.103453  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 11:31:44.103492  121308 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem, removing ...
	I0819 11:31:44.103508  121308 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 11:31:44.103572  121308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem (1082 bytes)
	I0819 11:31:44.103643  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 11:31:44.103664  121308 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem, removing ...
	I0819 11:31:44.103671  121308 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 11:31:44.103694  121308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem (1123 bytes)
	I0819 11:31:44.103762  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 11:31:44.103784  121308 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem, removing ...
	I0819 11:31:44.103790  121308 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 11:31:44.103814  121308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem (1679 bytes)
	I0819 11:31:44.103863  121308 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem org=jenkins.ha-503856-m03 san=[127.0.0.1 192.168.39.122 ha-503856-m03 localhost minikube]
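The server certificate above is generated by minikube and signed against its local CA, with the listed SANs embedded. A quick, illustrative way to confirm those SANs on the resulting file (plain openssl, not anything minikube runs; path taken from the log line above):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'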
	I0819 11:31:44.342828  121308 provision.go:177] copyRemoteCerts
	I0819 11:31:44.342889  121308 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 11:31:44.342928  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:31:44.345724  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.346012  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:44.346037  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.346251  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:31:44.346456  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:44.346690  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:31:44.346823  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/id_rsa Username:docker}
	I0819 11:31:44.426457  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 11:31:44.426546  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 11:31:44.450836  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 11:31:44.450920  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 11:31:44.475298  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 11:31:44.475386  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 11:31:44.499633  121308 provision.go:87] duration metric: took 402.822967ms to configureAuth
	I0819 11:31:44.499663  121308 buildroot.go:189] setting minikube options for container-runtime
	I0819 11:31:44.499908  121308 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:31:44.499995  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:31:44.502493  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.502894  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:44.502923  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.503087  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:31:44.503288  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:44.503478  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:44.503639  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:31:44.503836  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:31:44.504001  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0819 11:31:44.504015  121308 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 11:31:44.759869  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 11:31:44.759904  121308 main.go:141] libmachine: Checking connection to Docker...
	I0819 11:31:44.759913  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetURL
	I0819 11:31:44.761138  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Using libvirt version 6000000
	I0819 11:31:44.762898  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.763223  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:44.763256  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.763359  121308 main.go:141] libmachine: Docker is up and running!
	I0819 11:31:44.763388  121308 main.go:141] libmachine: Reticulating splines...
	I0819 11:31:44.763401  121308 client.go:171] duration metric: took 24.580080005s to LocalClient.Create
	I0819 11:31:44.763429  121308 start.go:167] duration metric: took 24.580158524s to libmachine.API.Create "ha-503856"
	I0819 11:31:44.763441  121308 start.go:293] postStartSetup for "ha-503856-m03" (driver="kvm2")
	I0819 11:31:44.763459  121308 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 11:31:44.763483  121308 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:31:44.763770  121308 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 11:31:44.763800  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:31:44.765581  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.765834  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:44.765863  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.766016  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:31:44.766214  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:44.766381  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:31:44.766543  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/id_rsa Username:docker}
	I0819 11:31:44.845814  121308 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 11:31:44.850310  121308 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 11:31:44.850345  121308 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/addons for local assets ...
	I0819 11:31:44.850422  121308 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/files for local assets ...
	I0819 11:31:44.850499  121308 filesync.go:149] local asset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> 1066322.pem in /etc/ssl/certs
	I0819 11:31:44.850506  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> /etc/ssl/certs/1066322.pem
	I0819 11:31:44.850587  121308 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 11:31:44.859846  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 11:31:44.883965  121308 start.go:296] duration metric: took 120.503585ms for postStartSetup
	I0819 11:31:44.884033  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetConfigRaw
	I0819 11:31:44.884659  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetIP
	I0819 11:31:44.887017  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.887332  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:44.887356  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.887642  121308 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/config.json ...
	I0819 11:31:44.887891  121308 start.go:128] duration metric: took 24.724548392s to createHost
	I0819 11:31:44.887916  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:31:44.890207  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.890543  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:44.890568  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.890750  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:31:44.890979  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:44.891181  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:44.891345  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:31:44.891520  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:31:44.891681  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0819 11:31:44.891692  121308 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 11:31:44.992295  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724067104.969501066
	
	I0819 11:31:44.992331  121308 fix.go:216] guest clock: 1724067104.969501066
	I0819 11:31:44.992344  121308 fix.go:229] Guest: 2024-08-19 11:31:44.969501066 +0000 UTC Remote: 2024-08-19 11:31:44.887905044 +0000 UTC m=+139.901267068 (delta=81.596022ms)
	I0819 11:31:44.992374  121308 fix.go:200] guest clock delta is within tolerance: 81.596022ms
	I0819 11:31:44.992383  121308 start.go:83] releasing machines lock for "ha-503856-m03", held for 24.829158862s
	I0819 11:31:44.992415  121308 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:31:44.992730  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetIP
	I0819 11:31:44.995088  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.995478  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:44.995506  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.997334  121308 out.go:177] * Found network options:
	I0819 11:31:44.998720  121308 out.go:177]   - NO_PROXY=192.168.39.102,192.168.39.183
	W0819 11:31:44.999907  121308 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 11:31:44.999934  121308 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 11:31:44.999950  121308 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:31:45.000567  121308 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:31:45.000777  121308 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:31:45.000881  121308 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 11:31:45.000921  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	W0819 11:31:45.001182  121308 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 11:31:45.001204  121308 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 11:31:45.001264  121308 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 11:31:45.001284  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:31:45.003845  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:45.004121  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:45.004149  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:45.004171  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:45.004421  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:31:45.004637  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:45.004661  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:45.004669  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:45.004782  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:31:45.004868  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:31:45.004963  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:45.005023  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/id_rsa Username:docker}
	I0819 11:31:45.005056  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:31:45.005149  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/id_rsa Username:docker}
	I0819 11:31:45.236046  121308 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 11:31:45.241806  121308 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 11:31:45.241884  121308 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 11:31:45.257689  121308 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 11:31:45.257720  121308 start.go:495] detecting cgroup driver to use...
	I0819 11:31:45.257795  121308 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 11:31:45.273519  121308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 11:31:45.287545  121308 docker.go:217] disabling cri-docker service (if available) ...
	I0819 11:31:45.287609  121308 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 11:31:45.301536  121308 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 11:31:45.316644  121308 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 11:31:45.427352  121308 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 11:31:45.591657  121308 docker.go:233] disabling docker service ...
	I0819 11:31:45.591772  121308 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 11:31:45.607168  121308 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 11:31:45.620964  121308 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 11:31:45.745004  121308 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 11:31:45.882334  121308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 11:31:45.897050  121308 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 11:31:45.916092  121308 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 11:31:45.916152  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:31:45.927078  121308 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 11:31:45.927150  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:31:45.938148  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:31:45.949598  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:31:45.961672  121308 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 11:31:45.973479  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:31:45.984953  121308 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:31:46.004406  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
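The sed commands above edit the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A rough sketch of what the edited drop-in should now contain (expected values shown as comments; the real file carries additional settings):

    # Sketch only - approximate result of the sed edits above
    sudo cat /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    # default_sysctls = [
    #   "net.ipv4.ip_unprivileged_port_start=0",
    # ]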
	I0819 11:31:46.015695  121308 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 11:31:46.026039  121308 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 11:31:46.026105  121308 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 11:31:46.040369  121308 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 11:31:46.050909  121308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:31:46.170079  121308 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 11:31:46.299697  121308 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 11:31:46.299812  121308 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 11:31:46.304745  121308 start.go:563] Will wait 60s for crictl version
	I0819 11:31:46.304806  121308 ssh_runner.go:195] Run: which crictl
	I0819 11:31:46.308508  121308 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 11:31:46.349022  121308 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 11:31:46.349120  121308 ssh_runner.go:195] Run: crio --version
	I0819 11:31:46.377230  121308 ssh_runner.go:195] Run: crio --version
	I0819 11:31:46.409263  121308 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 11:31:46.410757  121308 out.go:177]   - env NO_PROXY=192.168.39.102
	I0819 11:31:46.412215  121308 out.go:177]   - env NO_PROXY=192.168.39.102,192.168.39.183
	I0819 11:31:46.413489  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetIP
	I0819 11:31:46.416093  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:46.416513  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:46.416546  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:46.416763  121308 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 11:31:46.420934  121308 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 11:31:46.433578  121308 mustload.go:65] Loading cluster: ha-503856
	I0819 11:31:46.433821  121308 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:31:46.434096  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:31:46.434146  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:31:46.449241  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42613
	I0819 11:31:46.449690  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:31:46.450172  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:31:46.450195  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:31:46.450552  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:31:46.450766  121308 main.go:141] libmachine: (ha-503856) Calling .GetState
	I0819 11:31:46.452298  121308 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:31:46.452612  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:31:46.452652  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:31:46.467366  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34007
	I0819 11:31:46.467911  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:31:46.468346  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:31:46.468368  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:31:46.468695  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:31:46.468887  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:31:46.469063  121308 certs.go:68] Setting up /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856 for IP: 192.168.39.122
	I0819 11:31:46.469075  121308 certs.go:194] generating shared ca certs ...
	I0819 11:31:46.469096  121308 certs.go:226] acquiring lock for ca certs: {Name:mkea0a5571fe2f86ea35780a8a0585cdcb5f186c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:31:46.469240  121308 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key
	I0819 11:31:46.469292  121308 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key
	I0819 11:31:46.469306  121308 certs.go:256] generating profile certs ...
	I0819 11:31:46.469396  121308 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.key
	I0819 11:31:46.469428  121308 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.fb95d417
	I0819 11:31:46.469449  121308 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.fb95d417 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.183 192.168.39.122 192.168.39.254]
	I0819 11:31:46.527356  121308 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.fb95d417 ...
	I0819 11:31:46.527391  121308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.fb95d417: {Name:mk011e7a84b72a1279839beb66c759312559f7e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:31:46.527581  121308 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.fb95d417 ...
	I0819 11:31:46.527600  121308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.fb95d417: {Name:mk8decfc934a051f761e55204e12c6734d163b47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:31:46.527698  121308 certs.go:381] copying /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.fb95d417 -> /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt
	I0819 11:31:46.527878  121308 certs.go:385] copying /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.fb95d417 -> /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key
	I0819 11:31:46.528043  121308 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key
	I0819 11:31:46.528063  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 11:31:46.528083  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 11:31:46.528100  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 11:31:46.528121  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 11:31:46.528140  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 11:31:46.528161  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 11:31:46.528180  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 11:31:46.528199  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 11:31:46.528267  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem (1338 bytes)
	W0819 11:31:46.528307  121308 certs.go:480] ignoring /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632_empty.pem, impossibly tiny 0 bytes
	I0819 11:31:46.528321  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 11:31:46.528366  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem (1082 bytes)
	I0819 11:31:46.528399  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem (1123 bytes)
	I0819 11:31:46.528434  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem (1679 bytes)
	I0819 11:31:46.528490  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 11:31:46.528528  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:31:46.528548  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem -> /usr/share/ca-certificates/106632.pem
	I0819 11:31:46.528564  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> /usr/share/ca-certificates/1066322.pem
	I0819 11:31:46.528608  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:31:46.531806  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:31:46.532332  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:31:46.532358  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:31:46.532560  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:31:46.532781  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:31:46.532939  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:31:46.533073  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:31:46.608111  121308 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 11:31:46.612853  121308 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 11:31:46.625919  121308 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 11:31:46.630396  121308 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0819 11:31:46.640885  121308 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 11:31:46.645365  121308 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 11:31:46.655993  121308 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 11:31:46.660112  121308 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0819 11:31:46.672626  121308 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 11:31:46.677154  121308 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 11:31:46.689265  121308 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 11:31:46.693890  121308 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0819 11:31:46.705709  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 11:31:46.731797  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 11:31:46.755443  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 11:31:46.779069  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 11:31:46.803408  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0819 11:31:46.826941  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 11:31:46.851156  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 11:31:46.875085  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 11:31:46.900780  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 11:31:46.924779  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem --> /usr/share/ca-certificates/106632.pem (1338 bytes)
	I0819 11:31:46.948671  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /usr/share/ca-certificates/1066322.pem (1708 bytes)
	I0819 11:31:46.973787  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 11:31:46.990188  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0819 11:31:47.007593  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 11:31:47.025567  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0819 11:31:47.042073  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 11:31:47.058650  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0819 11:31:47.075435  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 11:31:47.092304  121308 ssh_runner.go:195] Run: openssl version
	I0819 11:31:47.098008  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 11:31:47.108795  121308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:31:47.113417  121308 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:31:47.113487  121308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:31:47.119331  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 11:31:47.130238  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106632.pem && ln -fs /usr/share/ca-certificates/106632.pem /etc/ssl/certs/106632.pem"
	I0819 11:31:47.141134  121308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106632.pem
	I0819 11:31:47.146656  121308 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 11:26 /usr/share/ca-certificates/106632.pem
	I0819 11:31:47.146727  121308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106632.pem
	I0819 11:31:47.153019  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106632.pem /etc/ssl/certs/51391683.0"
	I0819 11:31:47.164063  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1066322.pem && ln -fs /usr/share/ca-certificates/1066322.pem /etc/ssl/certs/1066322.pem"
	I0819 11:31:47.174821  121308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1066322.pem
	I0819 11:31:47.179154  121308 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 11:26 /usr/share/ca-certificates/1066322.pem
	I0819 11:31:47.179226  121308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1066322.pem
	I0819 11:31:47.185015  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1066322.pem /etc/ssl/certs/3ec20f2e.0"
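The <hash>.0 symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes, which is how OpenSSL looks up CA certificates in /etc/ssl/certs. Reproduced by hand, the pattern is:

    # Illustrative - how the symlink name for any of the certs above is derived
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"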
	I0819 11:31:47.198127  121308 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 11:31:47.202402  121308 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 11:31:47.202492  121308 kubeadm.go:934] updating node {m03 192.168.39.122 8443 v1.31.0 crio true true} ...
	I0819 11:31:47.202591  121308 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-503856-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.122
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
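These kubelet flags are written out a few steps later as the systemd drop-in 10-kubeadm.conf (see the scp to /etc/systemd/system/kubelet.service.d/ below). On the node, the effective unit plus drop-ins can be inspected with:

    systemctl cat kubelet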
	I0819 11:31:47.202616  121308 kube-vip.go:115] generating kube-vip config ...
	I0819 11:31:47.202656  121308 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 11:31:47.219835  121308 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 11:31:47.220006  121308 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
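This generated manifest is copied below to /etc/kubernetes/manifests/kube-vip.yaml, so the kubelet runs kube-vip as a static pod holding the control-plane VIP 192.168.39.254 on eth0. Illustrative checks (not part of the test run) once the node is up:

    kubectl -n kube-system get pods -o wide | grep kube-vip
    ip addr show eth0 | grep 192.168.39.254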
	I0819 11:31:47.220093  121308 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 11:31:47.229761  121308 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 11:31:47.229855  121308 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 11:31:47.239266  121308 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0819 11:31:47.239298  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 11:31:47.239347  121308 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0819 11:31:47.239361  121308 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 11:31:47.239369  121308 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0819 11:31:47.239377  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 11:31:47.239423  121308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:31:47.239447  121308 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 11:31:47.249350  121308 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 11:31:47.249392  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 11:31:47.262303  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 11:31:47.262362  121308 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 11:31:47.262396  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 11:31:47.262432  121308 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 11:31:47.321220  121308 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 11:31:47.321263  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
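Each binary is fetched from dl.k8s.io and verified against the published .sha256 file, as the checksum= fragments in the URLs above indicate. Done by hand, the equivalent download-and-verify looks roughly like this (minikube performs this itself):

    curl -LO https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet
    curl -LO https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check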
	I0819 11:31:48.135290  121308 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 11:31:48.144839  121308 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0819 11:31:48.161866  121308 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 11:31:48.178862  121308 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 11:31:48.196145  121308 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 11:31:48.200207  121308 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 11:31:48.212241  121308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:31:48.331898  121308 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:31:48.356030  121308 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:31:48.356580  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:31:48.356641  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:31:48.373578  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37627
	I0819 11:31:48.374156  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:31:48.374708  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:31:48.374740  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:31:48.375075  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:31:48.375267  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:31:48.375400  121308 start.go:317] joinCluster: &{Name:ha-503856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:31:48.375556  121308 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 11:31:48.375574  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:31:48.378704  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:31:48.379181  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:31:48.379212  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:31:48.379399  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:31:48.379598  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:31:48.379770  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:31:48.379918  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:31:48.518272  121308 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 11:31:48.518338  121308 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9g1dct.m32llnhjbnztl8nq --discovery-token-ca-cert-hash sha256:9a531b4470c09d8f50eed8894aa9b341ca16ed7810e6d53dbdc0d1c4cff0af52 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-503856-m03 --control-plane --apiserver-advertise-address=192.168.39.122 --apiserver-bind-port=8443"
	I0819 11:32:09.865314  121308 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9g1dct.m32llnhjbnztl8nq --discovery-token-ca-cert-hash sha256:9a531b4470c09d8f50eed8894aa9b341ca16ed7810e6d53dbdc0d1c4cff0af52 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-503856-m03 --control-plane --apiserver-advertise-address=192.168.39.122 --apiserver-bind-port=8443": (21.346937303s)
	I0819 11:32:09.865368  121308 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 11:32:10.369786  121308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-503856-m03 minikube.k8s.io/updated_at=2024_08_19T11_32_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934 minikube.k8s.io/name=ha-503856 minikube.k8s.io/primary=false
	I0819 11:32:10.496237  121308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-503856-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 11:32:10.601157  121308 start.go:319] duration metric: took 22.225751351s to joinCluster
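With the join, label, and taint steps complete, ha-503856-m03 is the third control-plane member of the cluster. An illustrative sanity check from any kubeconfig pointed at the cluster (not part of the test run):

    kubectl get nodes -o wide
    kubectl -n kube-system get pods -l component=etcd -o wide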
	I0819 11:32:10.601245  121308 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 11:32:10.601611  121308 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:32:10.603100  121308 out.go:177] * Verifying Kubernetes components...
	I0819 11:32:10.604140  121308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:32:10.877173  121308 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:32:10.909687  121308 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 11:32:10.909986  121308 kapi.go:59] client config for ha-503856: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.crt", KeyFile:"/home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.key", CAFile:"/home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 11:32:10.910062  121308 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I0819 11:32:10.910318  121308 node_ready.go:35] waiting up to 6m0s for node "ha-503856-m03" to be "Ready" ...
	I0819 11:32:10.910415  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:10.910424  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:10.910434  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:10.910445  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:10.914305  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:11.411478  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:11.411506  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:11.411517  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:11.411526  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:11.415876  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:32:11.910593  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:11.910621  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:11.910632  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:11.910639  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:11.914646  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:12.411531  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:12.411558  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:12.411570  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:12.411576  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:12.417289  121308 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 11:32:12.910720  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:12.910748  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:12.910763  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:12.910769  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:12.913724  121308 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 11:32:12.914406  121308 node_ready.go:53] node "ha-503856-m03" has status "Ready":"False"
	I0819 11:32:13.410815  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:13.410843  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:13.410854  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:13.410859  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:13.415382  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:32:13.911132  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:13.911161  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:13.911173  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:13.911181  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:13.914748  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:14.410563  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:14.410589  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:14.410599  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:14.410605  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:14.416656  121308 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 11:32:14.910651  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:14.910682  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:14.910693  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:14.910702  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:14.914226  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:14.914790  121308 node_ready.go:53] node "ha-503856-m03" has status "Ready":"False"
	I0819 11:32:15.411431  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:15.411455  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:15.411464  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:15.411472  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:15.417235  121308 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 11:32:15.911416  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:15.911438  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:15.911447  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:15.911452  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:15.914764  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:16.410694  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:16.410720  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:16.410732  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:16.410745  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:16.416771  121308 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 11:32:16.911232  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:16.911258  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:16.911266  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:16.911271  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:16.914931  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:16.915536  121308 node_ready.go:53] node "ha-503856-m03" has status "Ready":"False"
	I0819 11:32:17.410952  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:17.410974  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:17.410983  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:17.410987  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:17.414414  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:17.910675  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:17.910697  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:17.910706  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:17.910709  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:17.914143  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:18.410886  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:18.410920  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:18.410930  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:18.410936  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:18.415313  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:32:18.911459  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:18.911495  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:18.911505  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:18.911509  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:18.915032  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:18.915772  121308 node_ready.go:53] node "ha-503856-m03" has status "Ready":"False"
	I0819 11:32:19.411110  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:19.411133  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:19.411143  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:19.411148  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:19.414481  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:19.911465  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:19.911490  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:19.911501  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:19.911507  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:19.915808  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:32:20.411079  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:20.411104  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:20.411113  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:20.411117  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:20.415246  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:32:20.911185  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:20.911213  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:20.911224  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:20.911230  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:20.914330  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:21.410966  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:21.410991  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:21.411007  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:21.411012  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:21.414226  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:21.414713  121308 node_ready.go:53] node "ha-503856-m03" has status "Ready":"False"
	I0819 11:32:21.911097  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:21.911140  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:21.911149  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:21.911153  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:21.914344  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:22.411221  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:22.411250  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:22.411259  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:22.411264  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:22.415201  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:22.910568  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:22.910593  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:22.910602  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:22.910606  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:22.913979  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:23.410772  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:23.410796  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:23.410805  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:23.410809  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:23.414175  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:23.414786  121308 node_ready.go:53] node "ha-503856-m03" has status "Ready":"False"
	I0819 11:32:23.911039  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:23.911064  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:23.911076  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:23.911085  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:23.913720  121308 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 11:32:24.410570  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:24.410600  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:24.410611  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:24.410617  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:24.415505  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:32:24.910629  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:24.910662  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:24.910671  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:24.910677  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:24.914338  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:25.410692  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:25.410716  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:25.410725  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:25.410729  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:25.414446  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:25.415093  121308 node_ready.go:53] node "ha-503856-m03" has status "Ready":"False"
	I0819 11:32:25.911409  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:25.911431  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:25.911439  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:25.911443  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:25.914958  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:26.411578  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:26.411602  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:26.411610  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:26.411615  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:26.415244  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:26.911151  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:26.911178  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:26.911188  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:26.911203  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:26.914741  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:27.411029  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:27.411053  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:27.411062  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:27.411068  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:27.414377  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:27.910808  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:27.910834  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:27.910845  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:27.910851  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:27.914609  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:27.915274  121308 node_ready.go:53] node "ha-503856-m03" has status "Ready":"False"
	I0819 11:32:28.410799  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:28.410824  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:28.410832  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:28.410838  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:28.413934  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:28.910953  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:28.910979  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:28.910990  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:28.910996  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:28.914995  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:28.915716  121308 node_ready.go:49] node "ha-503856-m03" has status "Ready":"True"
	I0819 11:32:28.915755  121308 node_ready.go:38] duration metric: took 18.005420591s for node "ha-503856-m03" to be "Ready" ...
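The repeated GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03 requests above are the node_ready.go poll: the Node object is re-fetched until its Ready condition reports True. A minimal client-go sketch of that kind of poll (an illustrative sketch, assuming the kubeconfig path logged at 11:32:10.909687 and a roughly 500ms interval matching the request cadence above):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path and node name are taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19476-99410/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-503856-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				// The wait ends once the Ready condition flips to True.
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node ha-503856-m03 is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval, not taken from the source
	}
}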
	I0819 11:32:28.915771  121308 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 11:32:28.915849  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0819 11:32:28.915862  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:28.915873  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:28.915883  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:28.924660  121308 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0819 11:32:28.933241  121308 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2jdlw" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:28.933375  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-2jdlw
	I0819 11:32:28.933389  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:28.933400  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:28.933408  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:28.938241  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:32:28.938927  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:32:28.938946  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:28.938954  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:28.938959  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:28.942343  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:28.942928  121308 pod_ready.go:93] pod "coredns-6f6b679f8f-2jdlw" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:28.942947  121308 pod_ready.go:82] duration metric: took 9.662876ms for pod "coredns-6f6b679f8f-2jdlw" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:28.942959  121308 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-5dbrz" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:28.943027  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-5dbrz
	I0819 11:32:28.943036  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:28.943045  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:28.943052  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:28.946195  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:28.947301  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:32:28.947320  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:28.947328  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:28.947331  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:28.950305  121308 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 11:32:28.951142  121308 pod_ready.go:93] pod "coredns-6f6b679f8f-5dbrz" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:28.951162  121308 pod_ready.go:82] duration metric: took 8.195322ms for pod "coredns-6f6b679f8f-5dbrz" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:28.951172  121308 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:28.951246  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-503856
	I0819 11:32:28.951256  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:28.951266  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:28.951273  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:28.953998  121308 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 11:32:28.954637  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:32:28.954653  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:28.954677  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:28.954684  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:28.960807  121308 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 11:32:28.961306  121308 pod_ready.go:93] pod "etcd-ha-503856" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:28.961327  121308 pod_ready.go:82] duration metric: took 10.149483ms for pod "etcd-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:28.961337  121308 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:28.961403  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-503856-m02
	I0819 11:32:28.961409  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:28.961417  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:28.961424  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:28.964967  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:28.965819  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:32:28.965835  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:28.965846  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:28.965850  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:28.968576  121308 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 11:32:28.969109  121308 pod_ready.go:93] pod "etcd-ha-503856-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:28.969129  121308 pod_ready.go:82] duration metric: took 7.781053ms for pod "etcd-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:28.969139  121308 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-503856-m03" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:29.111548  121308 request.go:632] Waited for 142.335527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-503856-m03
	I0819 11:32:29.111636  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-503856-m03
	I0819 11:32:29.111661  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:29.111676  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:29.111684  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:29.115707  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:32:29.311960  121308 request.go:632] Waited for 195.380175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:29.312028  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:29.312036  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:29.312047  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:29.312057  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:29.315622  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:29.316104  121308 pod_ready.go:93] pod "etcd-ha-503856-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:29.316128  121308 pod_ready.go:82] duration metric: took 346.980355ms for pod "etcd-ha-503856-m03" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:29.316146  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:29.511229  121308 request.go:632] Waited for 195.001883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-503856
	I0819 11:32:29.511293  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-503856
	I0819 11:32:29.511300  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:29.511307  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:29.511317  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:29.514586  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:29.711790  121308 request.go:632] Waited for 196.451519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:32:29.711891  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:32:29.711900  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:29.711908  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:29.711912  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:29.716113  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:32:29.716932  121308 pod_ready.go:93] pod "kube-apiserver-ha-503856" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:29.716951  121308 pod_ready.go:82] duration metric: took 400.798611ms for pod "kube-apiserver-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:29.716961  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:29.911080  121308 request.go:632] Waited for 194.03651ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-503856-m02
	I0819 11:32:29.911189  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-503856-m02
	I0819 11:32:29.911211  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:29.911219  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:29.911224  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:29.914605  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:30.111766  121308 request.go:632] Waited for 196.114055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:32:30.111831  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:32:30.111837  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:30.111845  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:30.111850  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:30.115295  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:30.115935  121308 pod_ready.go:93] pod "kube-apiserver-ha-503856-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:30.115955  121308 pod_ready.go:82] duration metric: took 398.985634ms for pod "kube-apiserver-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:30.115965  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-503856-m03" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:30.311084  121308 request.go:632] Waited for 195.040261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-503856-m03
	I0819 11:32:30.311168  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-503856-m03
	I0819 11:32:30.311174  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:30.311181  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:30.311186  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:30.314832  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:30.511950  121308 request.go:632] Waited for 196.362241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:30.512008  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:30.512013  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:30.512021  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:30.512025  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:30.515260  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:30.515826  121308 pod_ready.go:93] pod "kube-apiserver-ha-503856-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:30.515848  121308 pod_ready.go:82] duration metric: took 399.875288ms for pod "kube-apiserver-ha-503856-m03" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:30.515862  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:30.712012  121308 request.go:632] Waited for 196.07124ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-503856
	I0819 11:32:30.712121  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-503856
	I0819 11:32:30.712132  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:30.712145  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:30.712155  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:30.715522  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:30.911606  121308 request.go:632] Waited for 195.377819ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:32:30.911698  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:32:30.911704  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:30.911711  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:30.911720  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:30.915054  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:30.915831  121308 pod_ready.go:93] pod "kube-controller-manager-ha-503856" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:30.915853  121308 pod_ready.go:82] duration metric: took 399.983431ms for pod "kube-controller-manager-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:30.915864  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:31.111974  121308 request.go:632] Waited for 196.007678ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-503856-m02
	I0819 11:32:31.112032  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-503856-m02
	I0819 11:32:31.112038  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:31.112046  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:31.112051  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:31.115564  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:31.311809  121308 request.go:632] Waited for 195.413562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:32:31.311879  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:32:31.311886  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:31.311898  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:31.311906  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:31.315398  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:31.316219  121308 pod_ready.go:93] pod "kube-controller-manager-ha-503856-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:31.316240  121308 pod_ready.go:82] duration metric: took 400.370818ms for pod "kube-controller-manager-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:31.316250  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-503856-m03" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:31.511393  121308 request.go:632] Waited for 195.036798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-503856-m03
	I0819 11:32:31.511463  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-503856-m03
	I0819 11:32:31.511471  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:31.511484  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:31.511490  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:31.515100  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:31.711211  121308 request.go:632] Waited for 195.29388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:31.711301  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:31.711312  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:31.711324  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:31.711332  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:31.714829  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:31.715366  121308 pod_ready.go:93] pod "kube-controller-manager-ha-503856-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:31.715390  121308 pod_ready.go:82] duration metric: took 399.13227ms for pod "kube-controller-manager-ha-503856-m03" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:31.715403  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8xzr9" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:31.911422  121308 request.go:632] Waited for 195.934341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8xzr9
	I0819 11:32:31.911481  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8xzr9
	I0819 11:32:31.911488  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:31.911496  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:31.911501  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:31.914817  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:32.111836  121308 request.go:632] Waited for 196.351993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:32.111924  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:32.111933  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:32.111946  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:32.111954  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:32.115286  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:32.115869  121308 pod_ready.go:93] pod "kube-proxy-8xzr9" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:32.115888  121308 pod_ready.go:82] duration metric: took 400.478685ms for pod "kube-proxy-8xzr9" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:32.115901  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-d6zw9" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:32.310981  121308 request.go:632] Waited for 194.990168ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d6zw9
	I0819 11:32:32.311053  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d6zw9
	I0819 11:32:32.311060  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:32.311068  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:32.311075  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:32.314660  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:32.511671  121308 request.go:632] Waited for 196.349477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:32:32.511741  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:32:32.511749  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:32.511760  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:32.511766  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:32.515260  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:32.515890  121308 pod_ready.go:93] pod "kube-proxy-d6zw9" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:32.515912  121308 pod_ready.go:82] duration metric: took 400.003811ms for pod "kube-proxy-d6zw9" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:32.515922  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j2f6h" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:32.711443  121308 request.go:632] Waited for 195.447544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j2f6h
	I0819 11:32:32.711526  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j2f6h
	I0819 11:32:32.711533  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:32.711553  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:32.711577  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:32.715028  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:32.911309  121308 request.go:632] Waited for 195.361052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:32:32.911402  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:32:32.911416  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:32.911429  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:32.911438  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:32.914872  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:32.915563  121308 pod_ready.go:93] pod "kube-proxy-j2f6h" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:32.915584  121308 pod_ready.go:82] duration metric: took 399.655981ms for pod "kube-proxy-j2f6h" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:32.915598  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:33.111779  121308 request.go:632] Waited for 196.080229ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-503856
	I0819 11:32:33.111840  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-503856
	I0819 11:32:33.111845  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:33.111852  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:33.111856  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:33.115006  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:33.312046  121308 request.go:632] Waited for 196.455562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:32:33.312120  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:32:33.312128  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:33.312139  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:33.312149  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:33.315807  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:33.316327  121308 pod_ready.go:93] pod "kube-scheduler-ha-503856" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:33.316347  121308 pod_ready.go:82] duration metric: took 400.741583ms for pod "kube-scheduler-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:33.316358  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:33.511935  121308 request.go:632] Waited for 195.48573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-503856-m02
	I0819 11:32:33.512010  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-503856-m02
	I0819 11:32:33.512019  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:33.512027  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:33.512033  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:33.515400  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:33.712035  121308 request.go:632] Waited for 195.865929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:32:33.712099  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:32:33.712111  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:33.712122  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:33.712130  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:33.715572  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:33.716554  121308 pod_ready.go:93] pod "kube-scheduler-ha-503856-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:33.716573  121308 pod_ready.go:82] duration metric: took 400.209171ms for pod "kube-scheduler-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:33.716583  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-503856-m03" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:33.911698  121308 request.go:632] Waited for 195.027976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-503856-m03
	I0819 11:32:33.911791  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-503856-m03
	I0819 11:32:33.911800  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:33.911811  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:33.911821  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:33.915154  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:34.111126  121308 request.go:632] Waited for 195.328636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:34.111226  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:34.111234  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:34.111243  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:34.111251  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:34.115781  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:32:34.116320  121308 pod_ready.go:93] pod "kube-scheduler-ha-503856-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:34.116341  121308 pod_ready.go:82] duration metric: took 399.750695ms for pod "kube-scheduler-ha-503856-m03" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:34.116353  121308 pod_ready.go:39] duration metric: took 5.200563994s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 11:32:34.116367  121308 api_server.go:52] waiting for apiserver process to appear ...
	I0819 11:32:34.116436  121308 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:32:34.131085  121308 api_server.go:72] duration metric: took 23.529785868s to wait for apiserver process to appear ...
	I0819 11:32:34.131122  121308 api_server.go:88] waiting for apiserver healthz status ...
	I0819 11:32:34.131146  121308 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I0819 11:32:34.138628  121308 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I0819 11:32:34.138734  121308 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I0819 11:32:34.138748  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:34.138759  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:34.138767  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:34.139756  121308 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 11:32:34.139832  121308 api_server.go:141] control plane version: v1.31.0
	I0819 11:32:34.139848  121308 api_server.go:131] duration metric: took 8.718688ms to wait for apiserver health ...
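The api_server.go probes above hit /healthz (returning 200 "ok") and then /version (reporting v1.31.0) against https://192.168.39.102:8443. A minimal client-go sketch of the same two checks, assuming the kubeconfig path logged earlier in this run:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19476-99410/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Raw GET against /healthz, the same endpoint that returned 200 "ok" above.
	body, err := client.CoreV1().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)
	// Server version, equivalent to the GET /version that reported the control plane version.
	v, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}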
	I0819 11:32:34.139859  121308 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 11:32:34.311012  121308 request.go:632] Waited for 171.070779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0819 11:32:34.311097  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0819 11:32:34.311105  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:34.311115  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:34.311124  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:34.318533  121308 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 11:32:34.326586  121308 system_pods.go:59] 24 kube-system pods found
	I0819 11:32:34.326624  121308 system_pods.go:61] "coredns-6f6b679f8f-2jdlw" [ad8206ac-dc67-4fcb-a4ad-a431f3b0b7cd] Running
	I0819 11:32:34.326631  121308 system_pods.go:61] "coredns-6f6b679f8f-5dbrz" [5530828e-1061-434c-ad2f-80847f3ab171] Running
	I0819 11:32:34.326637  121308 system_pods.go:61] "etcd-ha-503856" [b8932b07-bc71-4d14-bc4c-a323aa900891] Running
	I0819 11:32:34.326642  121308 system_pods.go:61] "etcd-ha-503856-m02" [7c495867-e51d-4100-b0d8-2794e45a18c4] Running
	I0819 11:32:34.326647  121308 system_pods.go:61] "etcd-ha-503856-m03" [8a5f4851-a71f-4491-916b-f5b75929b327] Running
	I0819 11:32:34.326651  121308 system_pods.go:61] "kindnet-hvszk" [5484350e-fd9c-4901-984b-05f77e1d20ba] Running
	I0819 11:32:34.326655  121308 system_pods.go:61] "kindnet-rnjwj" [1a6e4b0d-f3f2-45e3-b66e-b0457ba61723] Running
	I0819 11:32:34.326660  121308 system_pods.go:61] "kindnet-st2mx" [99e7c93b-40a9-4902-b1a5-5a6bcc55735c] Running
	I0819 11:32:34.326664  121308 system_pods.go:61] "kube-apiserver-ha-503856" [bdea9580-2d12-4e91-acbd-5a5e08f5637c] Running
	I0819 11:32:34.326669  121308 system_pods.go:61] "kube-apiserver-ha-503856-m02" [a1d5950d-50bc-42e8-b432-27425aa4b80d] Running
	I0819 11:32:34.326674  121308 system_pods.go:61] "kube-apiserver-ha-503856-m03" [92a576da-58d9-42cf-90ed-c82f208e060f] Running
	I0819 11:32:34.326687  121308 system_pods.go:61] "kube-controller-manager-ha-503856" [36c9c0c5-0b9e-4fce-a34f-bf1c21590af4] Running
	I0819 11:32:34.326694  121308 system_pods.go:61] "kube-controller-manager-ha-503856-m02" [a58cf93b-47a4-4cb7-80e1-afb525b1a2b2] Running
	I0819 11:32:34.326699  121308 system_pods.go:61] "kube-controller-manager-ha-503856-m03" [f0ee565d-81f7-4f17-9e58-8d79f5defda6] Running
	I0819 11:32:34.326705  121308 system_pods.go:61] "kube-proxy-8xzr9" [436c9779-87db-44f7-9650-7e4b5431fbed] Running
	I0819 11:32:34.326711  121308 system_pods.go:61] "kube-proxy-d6zw9" [f8054009-c06a-4ccc-b6c4-22e0f6bb632a] Running
	I0819 11:32:34.326720  121308 system_pods.go:61] "kube-proxy-j2f6h" [e9623c18-7b96-49b5-8cc6-6ea700eec47e] Running
	I0819 11:32:34.326726  121308 system_pods.go:61] "kube-scheduler-ha-503856" [2c8c7e78-ded0-47ff-8720-b1c36c9123c6] Running
	I0819 11:32:34.326732  121308 system_pods.go:61] "kube-scheduler-ha-503856-m02" [6f51735c-0f3e-49f8-aff7-c6c485e0e653] Running
	I0819 11:32:34.326738  121308 system_pods.go:61] "kube-scheduler-ha-503856-m03" [afad7788-d0c7-4959-91b5-209ced760d93] Running
	I0819 11:32:34.326743  121308 system_pods.go:61] "kube-vip-ha-503856" [a184b6bf-9e5f-40a1-a3f8-5b97ce4cd6b8] Running
	I0819 11:32:34.326749  121308 system_pods.go:61] "kube-vip-ha-503856-m02" [5d66ea23-6878-403f-88df-94bf42ad5800] Running
	I0819 11:32:34.326754  121308 system_pods.go:61] "kube-vip-ha-503856-m03" [4d116083-4440-468e-ad2d-1364e601db1e] Running
	I0819 11:32:34.326767  121308 system_pods.go:61] "storage-provisioner" [4c212413-ac90-45fb-92de-bfd9e9115540] Running
	I0819 11:32:34.326779  121308 system_pods.go:74] duration metric: took 186.910185ms to wait for pod list to return data ...
	I0819 11:32:34.326790  121308 default_sa.go:34] waiting for default service account to be created ...
	I0819 11:32:34.511195  121308 request.go:632] Waited for 184.309012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I0819 11:32:34.511255  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I0819 11:32:34.511261  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:34.511271  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:34.511278  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:34.514975  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:34.515093  121308 default_sa.go:45] found service account: "default"
	I0819 11:32:34.515108  121308 default_sa.go:55] duration metric: took 188.308694ms for default service account to be created ...
	I0819 11:32:34.515117  121308 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 11:32:34.711477  121308 request.go:632] Waited for 196.27503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0819 11:32:34.711569  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0819 11:32:34.711582  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:34.711590  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:34.711596  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:34.717916  121308 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 11:32:34.728200  121308 system_pods.go:86] 24 kube-system pods found
	I0819 11:32:34.728231  121308 system_pods.go:89] "coredns-6f6b679f8f-2jdlw" [ad8206ac-dc67-4fcb-a4ad-a431f3b0b7cd] Running
	I0819 11:32:34.728237  121308 system_pods.go:89] "coredns-6f6b679f8f-5dbrz" [5530828e-1061-434c-ad2f-80847f3ab171] Running
	I0819 11:32:34.728242  121308 system_pods.go:89] "etcd-ha-503856" [b8932b07-bc71-4d14-bc4c-a323aa900891] Running
	I0819 11:32:34.728246  121308 system_pods.go:89] "etcd-ha-503856-m02" [7c495867-e51d-4100-b0d8-2794e45a18c4] Running
	I0819 11:32:34.728249  121308 system_pods.go:89] "etcd-ha-503856-m03" [8a5f4851-a71f-4491-916b-f5b75929b327] Running
	I0819 11:32:34.728253  121308 system_pods.go:89] "kindnet-hvszk" [5484350e-fd9c-4901-984b-05f77e1d20ba] Running
	I0819 11:32:34.728256  121308 system_pods.go:89] "kindnet-rnjwj" [1a6e4b0d-f3f2-45e3-b66e-b0457ba61723] Running
	I0819 11:32:34.728259  121308 system_pods.go:89] "kindnet-st2mx" [99e7c93b-40a9-4902-b1a5-5a6bcc55735c] Running
	I0819 11:32:34.728262  121308 system_pods.go:89] "kube-apiserver-ha-503856" [bdea9580-2d12-4e91-acbd-5a5e08f5637c] Running
	I0819 11:32:34.728266  121308 system_pods.go:89] "kube-apiserver-ha-503856-m02" [a1d5950d-50bc-42e8-b432-27425aa4b80d] Running
	I0819 11:32:34.728269  121308 system_pods.go:89] "kube-apiserver-ha-503856-m03" [92a576da-58d9-42cf-90ed-c82f208e060f] Running
	I0819 11:32:34.728273  121308 system_pods.go:89] "kube-controller-manager-ha-503856" [36c9c0c5-0b9e-4fce-a34f-bf1c21590af4] Running
	I0819 11:32:34.728276  121308 system_pods.go:89] "kube-controller-manager-ha-503856-m02" [a58cf93b-47a4-4cb7-80e1-afb525b1a2b2] Running
	I0819 11:32:34.728280  121308 system_pods.go:89] "kube-controller-manager-ha-503856-m03" [f0ee565d-81f7-4f17-9e58-8d79f5defda6] Running
	I0819 11:32:34.728282  121308 system_pods.go:89] "kube-proxy-8xzr9" [436c9779-87db-44f7-9650-7e4b5431fbed] Running
	I0819 11:32:34.728285  121308 system_pods.go:89] "kube-proxy-d6zw9" [f8054009-c06a-4ccc-b6c4-22e0f6bb632a] Running
	I0819 11:32:34.728289  121308 system_pods.go:89] "kube-proxy-j2f6h" [e9623c18-7b96-49b5-8cc6-6ea700eec47e] Running
	I0819 11:32:34.728292  121308 system_pods.go:89] "kube-scheduler-ha-503856" [2c8c7e78-ded0-47ff-8720-b1c36c9123c6] Running
	I0819 11:32:34.728295  121308 system_pods.go:89] "kube-scheduler-ha-503856-m02" [6f51735c-0f3e-49f8-aff7-c6c485e0e653] Running
	I0819 11:32:34.728298  121308 system_pods.go:89] "kube-scheduler-ha-503856-m03" [afad7788-d0c7-4959-91b5-209ced760d93] Running
	I0819 11:32:34.728302  121308 system_pods.go:89] "kube-vip-ha-503856" [a184b6bf-9e5f-40a1-a3f8-5b97ce4cd6b8] Running
	I0819 11:32:34.728304  121308 system_pods.go:89] "kube-vip-ha-503856-m02" [5d66ea23-6878-403f-88df-94bf42ad5800] Running
	I0819 11:32:34.728307  121308 system_pods.go:89] "kube-vip-ha-503856-m03" [4d116083-4440-468e-ad2d-1364e601db1e] Running
	I0819 11:32:34.728310  121308 system_pods.go:89] "storage-provisioner" [4c212413-ac90-45fb-92de-bfd9e9115540] Running
	I0819 11:32:34.728317  121308 system_pods.go:126] duration metric: took 213.192293ms to wait for k8s-apps to be running ...
	I0819 11:32:34.728325  121308 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 11:32:34.728370  121308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:32:34.743448  121308 system_svc.go:56] duration metric: took 15.111773ms WaitForService to wait for kubelet
	I0819 11:32:34.743483  121308 kubeadm.go:582] duration metric: took 24.142193278s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:32:34.743504  121308 node_conditions.go:102] verifying NodePressure condition ...
	I0819 11:32:34.911911  121308 request.go:632] Waited for 168.309732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I0819 11:32:34.911988  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I0819 11:32:34.911994  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:34.912002  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:34.912008  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:34.916346  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:32:34.917682  121308 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 11:32:34.917705  121308 node_conditions.go:123] node cpu capacity is 2
	I0819 11:32:34.917716  121308 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 11:32:34.917719  121308 node_conditions.go:123] node cpu capacity is 2
	I0819 11:32:34.917723  121308 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 11:32:34.917726  121308 node_conditions.go:123] node cpu capacity is 2
	I0819 11:32:34.917730  121308 node_conditions.go:105] duration metric: took 174.221965ms to run NodePressure ...
	I0819 11:32:34.917748  121308 start.go:241] waiting for startup goroutines ...
	I0819 11:32:34.917768  121308 start.go:255] writing updated cluster config ...
	I0819 11:32:34.918055  121308 ssh_runner.go:195] Run: rm -f paused
	I0819 11:32:34.969413  121308 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 11:32:34.972111  121308 out.go:177] * Done! kubectl is now configured to use "ha-503856" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 19 11:36:10 ha-503856 crio[678]: time="2024-08-19 11:36:10.886265274Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a1e8a99-8e51-44eb-b8b3-1b3fe7834c2e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:36:10 ha-503856 crio[678]: time="2024-08-19 11:36:10.886492313Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:56a5ad9cc18e747407a6ebf193b0141d5d39e2506db7ff36e9f281ca31d07175,PodSandboxId:1191cb555eb550a2e7a62a8d5ad002f11d0ec47cb7997421e63fcfcdd619bb42,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724067158500890442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7wpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2dd3c9-6ef2-4aaf-a44f-04fcd5b5ee2a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c7867b6691ac22ff04851850c5c61a8f266622c1c39592596364c7fc6e99c39,PodSandboxId:7074b09831f6bd3b03135218f0698131342183be25852fa7f92d1bd429ec790a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724067024291050172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c212413-ac90-45fb-92de-bfd9e9115540,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67513ebd15d07c15fbfc61b2c17800f56b9db39a3b81085018122fe96a1f9de,PodSandboxId:13c07aa9a002518609bd210554f3a19471e33ebfd9b816b1390235a4c81233e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067024223121643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5dbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530828e-1061-434c-ad2f-80847f3ab171,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8315e448000809a9b1ec9c80658a16ee2c0abdfb6a00d9cf88e506833fc2e464,PodSandboxId:0b0b0a070f3ec813823103f1ea8a06af695e4510a42cd50f86da517d8da95ef1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067024221336819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2jdlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad8206ac-dc
67-4fcb-a4ad-a431f3b0b7cd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1964134e9de80be5a82ee203eebe9f5718ba2a8b84e55479e47e81a74a259a50,PodSandboxId:9079c84056e4b237e79326194bb75a2f176c9f8bf9556b3168d050890162ac6c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724067012619972475,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-st2mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e7c93b-40a9-4902-b1a5-5a6bcc55735c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68730d308f1456cb2e025409c4d2edab9b17464c74f9beabfafc13246e8cc2b3,PodSandboxId:adace0914115cd0a47335d629a4cfbb92f7251bc72bb0a436ae6daeab0d4192d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172406700
8316850072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8054009-c06a-4ccc-b6c4-22e0f6bb632a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11a47171a5438e39273ec80f4da3583d5a56af5d5de55c977d904283dc19b112,PodSandboxId:2c2e375766b14429ccbca66cbd90a4de54eadb91037b4cff34cc1cb046a93549,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172406699967
8401575,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243cc46027459bd9ae669bd4959ae8b2,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccea80d1a22a4536a82c166633220aa67d32633919751a2cf1da7f06c91bc674,PodSandboxId:982016c43ab0e7cfe37c1296b1e9abb139d5479ac81c6f267b9f0a41da9ab8a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724066997299379623,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9aebe0e986b3c246dae4d64df3eee15,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3879d2de39f1c9189c2e3eb16e843ddcbe448dd4be3f2b5ba5617abddfeb007a,PodSandboxId:eb7c9eb1ba04206c376798a19b430446dfef69f4a421e9d862de5c7e5f687408,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724066997296906648,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d885daa5ebbe527098639eca601027a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a1ce45d7b784d0b5b53838e68b5660b6da14d8a1966f23cc6949e4c31ea98e,PodSandboxId:6a5a214f4ecfbe6589eac54fcf3c31672cfe0befef185327158583c48ed17b3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724066997259670096,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 147340613ea835c906d66a92a67bf8cf,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df01b4ed6011abdeb64cada43bcfa1f28ba45ef7e17f2991c3ec8a035d214f2e,PodSandboxId:874ce4bf24c62441806a52c87405c4fc17310af6a25939f4fa49941f7e634a7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724066997211052852,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22b93ecec4cfd4767d930360e5939ae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a1e8a99-8e51-44eb-b8b3-1b3fe7834c2e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:36:10 ha-503856 crio[678]: time="2024-08-19 11:36:10.923377193Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5fbfaaba-3724-4857-8a95-4d3728012259 name=/runtime.v1.RuntimeService/Version
	Aug 19 11:36:10 ha-503856 crio[678]: time="2024-08-19 11:36:10.923450797Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5fbfaaba-3724-4857-8a95-4d3728012259 name=/runtime.v1.RuntimeService/Version
	Aug 19 11:36:10 ha-503856 crio[678]: time="2024-08-19 11:36:10.924409590Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f1b06264-d21c-4320-a66c-89fe9e174795 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:36:10 ha-503856 crio[678]: time="2024-08-19 11:36:10.924846307Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067370924825806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1b06264-d21c-4320-a66c-89fe9e174795 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:36:10 ha-503856 crio[678]: time="2024-08-19 11:36:10.925319974Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67d7fcb2-072a-484c-96f6-bb3b3508ee1d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:36:10 ha-503856 crio[678]: time="2024-08-19 11:36:10.925369343Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67d7fcb2-072a-484c-96f6-bb3b3508ee1d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:36:10 ha-503856 crio[678]: time="2024-08-19 11:36:10.925593416Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:56a5ad9cc18e747407a6ebf193b0141d5d39e2506db7ff36e9f281ca31d07175,PodSandboxId:1191cb555eb550a2e7a62a8d5ad002f11d0ec47cb7997421e63fcfcdd619bb42,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724067158500890442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7wpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2dd3c9-6ef2-4aaf-a44f-04fcd5b5ee2a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c7867b6691ac22ff04851850c5c61a8f266622c1c39592596364c7fc6e99c39,PodSandboxId:7074b09831f6bd3b03135218f0698131342183be25852fa7f92d1bd429ec790a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724067024291050172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c212413-ac90-45fb-92de-bfd9e9115540,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67513ebd15d07c15fbfc61b2c17800f56b9db39a3b81085018122fe96a1f9de,PodSandboxId:13c07aa9a002518609bd210554f3a19471e33ebfd9b816b1390235a4c81233e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067024223121643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5dbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530828e-1061-434c-ad2f-80847f3ab171,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8315e448000809a9b1ec9c80658a16ee2c0abdfb6a00d9cf88e506833fc2e464,PodSandboxId:0b0b0a070f3ec813823103f1ea8a06af695e4510a42cd50f86da517d8da95ef1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067024221336819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2jdlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad8206ac-dc
67-4fcb-a4ad-a431f3b0b7cd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1964134e9de80be5a82ee203eebe9f5718ba2a8b84e55479e47e81a74a259a50,PodSandboxId:9079c84056e4b237e79326194bb75a2f176c9f8bf9556b3168d050890162ac6c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724067012619972475,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-st2mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e7c93b-40a9-4902-b1a5-5a6bcc55735c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68730d308f1456cb2e025409c4d2edab9b17464c74f9beabfafc13246e8cc2b3,PodSandboxId:adace0914115cd0a47335d629a4cfbb92f7251bc72bb0a436ae6daeab0d4192d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172406700
8316850072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8054009-c06a-4ccc-b6c4-22e0f6bb632a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11a47171a5438e39273ec80f4da3583d5a56af5d5de55c977d904283dc19b112,PodSandboxId:2c2e375766b14429ccbca66cbd90a4de54eadb91037b4cff34cc1cb046a93549,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172406699967
8401575,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243cc46027459bd9ae669bd4959ae8b2,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccea80d1a22a4536a82c166633220aa67d32633919751a2cf1da7f06c91bc674,PodSandboxId:982016c43ab0e7cfe37c1296b1e9abb139d5479ac81c6f267b9f0a41da9ab8a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724066997299379623,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9aebe0e986b3c246dae4d64df3eee15,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3879d2de39f1c9189c2e3eb16e843ddcbe448dd4be3f2b5ba5617abddfeb007a,PodSandboxId:eb7c9eb1ba04206c376798a19b430446dfef69f4a421e9d862de5c7e5f687408,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724066997296906648,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d885daa5ebbe527098639eca601027a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a1ce45d7b784d0b5b53838e68b5660b6da14d8a1966f23cc6949e4c31ea98e,PodSandboxId:6a5a214f4ecfbe6589eac54fcf3c31672cfe0befef185327158583c48ed17b3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724066997259670096,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 147340613ea835c906d66a92a67bf8cf,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df01b4ed6011abdeb64cada43bcfa1f28ba45ef7e17f2991c3ec8a035d214f2e,PodSandboxId:874ce4bf24c62441806a52c87405c4fc17310af6a25939f4fa49941f7e634a7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724066997211052852,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22b93ecec4cfd4767d930360e5939ae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67d7fcb2-072a-484c-96f6-bb3b3508ee1d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:36:10 ha-503856 crio[678]: time="2024-08-19 11:36:10.969373218Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=60e1efc3-8246-4266-a57a-7f8e1608416a name=/runtime.v1.RuntimeService/Version
	Aug 19 11:36:10 ha-503856 crio[678]: time="2024-08-19 11:36:10.969446718Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=60e1efc3-8246-4266-a57a-7f8e1608416a name=/runtime.v1.RuntimeService/Version
	Aug 19 11:36:10 ha-503856 crio[678]: time="2024-08-19 11:36:10.970773443Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=22f0933b-776f-442b-b5eb-ef56a7320df3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:36:10 ha-503856 crio[678]: time="2024-08-19 11:36:10.971507524Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067370971479533,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=22f0933b-776f-442b-b5eb-ef56a7320df3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:36:10 ha-503856 crio[678]: time="2024-08-19 11:36:10.972373331Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=360007af-7d0f-459d-a155-984e6533ef7c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:36:10 ha-503856 crio[678]: time="2024-08-19 11:36:10.972484742Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=360007af-7d0f-459d-a155-984e6533ef7c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:36:10 ha-503856 crio[678]: time="2024-08-19 11:36:10.972726083Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:56a5ad9cc18e747407a6ebf193b0141d5d39e2506db7ff36e9f281ca31d07175,PodSandboxId:1191cb555eb550a2e7a62a8d5ad002f11d0ec47cb7997421e63fcfcdd619bb42,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724067158500890442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7wpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2dd3c9-6ef2-4aaf-a44f-04fcd5b5ee2a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c7867b6691ac22ff04851850c5c61a8f266622c1c39592596364c7fc6e99c39,PodSandboxId:7074b09831f6bd3b03135218f0698131342183be25852fa7f92d1bd429ec790a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724067024291050172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c212413-ac90-45fb-92de-bfd9e9115540,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67513ebd15d07c15fbfc61b2c17800f56b9db39a3b81085018122fe96a1f9de,PodSandboxId:13c07aa9a002518609bd210554f3a19471e33ebfd9b816b1390235a4c81233e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067024223121643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5dbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530828e-1061-434c-ad2f-80847f3ab171,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8315e448000809a9b1ec9c80658a16ee2c0abdfb6a00d9cf88e506833fc2e464,PodSandboxId:0b0b0a070f3ec813823103f1ea8a06af695e4510a42cd50f86da517d8da95ef1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067024221336819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2jdlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad8206ac-dc
67-4fcb-a4ad-a431f3b0b7cd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1964134e9de80be5a82ee203eebe9f5718ba2a8b84e55479e47e81a74a259a50,PodSandboxId:9079c84056e4b237e79326194bb75a2f176c9f8bf9556b3168d050890162ac6c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724067012619972475,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-st2mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e7c93b-40a9-4902-b1a5-5a6bcc55735c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68730d308f1456cb2e025409c4d2edab9b17464c74f9beabfafc13246e8cc2b3,PodSandboxId:adace0914115cd0a47335d629a4cfbb92f7251bc72bb0a436ae6daeab0d4192d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172406700
8316850072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8054009-c06a-4ccc-b6c4-22e0f6bb632a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11a47171a5438e39273ec80f4da3583d5a56af5d5de55c977d904283dc19b112,PodSandboxId:2c2e375766b14429ccbca66cbd90a4de54eadb91037b4cff34cc1cb046a93549,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172406699967
8401575,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243cc46027459bd9ae669bd4959ae8b2,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccea80d1a22a4536a82c166633220aa67d32633919751a2cf1da7f06c91bc674,PodSandboxId:982016c43ab0e7cfe37c1296b1e9abb139d5479ac81c6f267b9f0a41da9ab8a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724066997299379623,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9aebe0e986b3c246dae4d64df3eee15,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3879d2de39f1c9189c2e3eb16e843ddcbe448dd4be3f2b5ba5617abddfeb007a,PodSandboxId:eb7c9eb1ba04206c376798a19b430446dfef69f4a421e9d862de5c7e5f687408,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724066997296906648,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d885daa5ebbe527098639eca601027a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a1ce45d7b784d0b5b53838e68b5660b6da14d8a1966f23cc6949e4c31ea98e,PodSandboxId:6a5a214f4ecfbe6589eac54fcf3c31672cfe0befef185327158583c48ed17b3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724066997259670096,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 147340613ea835c906d66a92a67bf8cf,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df01b4ed6011abdeb64cada43bcfa1f28ba45ef7e17f2991c3ec8a035d214f2e,PodSandboxId:874ce4bf24c62441806a52c87405c4fc17310af6a25939f4fa49941f7e634a7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724066997211052852,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22b93ecec4cfd4767d930360e5939ae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=360007af-7d0f-459d-a155-984e6533ef7c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:36:11 ha-503856 crio[678]: time="2024-08-19 11:36:11.010915166Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f6634115-0584-4943-b40e-53b2a60cc85d name=/runtime.v1.RuntimeService/Version
	Aug 19 11:36:11 ha-503856 crio[678]: time="2024-08-19 11:36:11.010992495Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f6634115-0584-4943-b40e-53b2a60cc85d name=/runtime.v1.RuntimeService/Version
	Aug 19 11:36:11 ha-503856 crio[678]: time="2024-08-19 11:36:11.012320888Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=85429f3e-e78e-4856-ad72-8b39f5e02ae5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:36:11 ha-503856 crio[678]: time="2024-08-19 11:36:11.012936984Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067371012913433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=85429f3e-e78e-4856-ad72-8b39f5e02ae5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:36:11 ha-503856 crio[678]: time="2024-08-19 11:36:11.013474480Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b8ff080-ae82-403c-b964-b2126535802e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:36:11 ha-503856 crio[678]: time="2024-08-19 11:36:11.013523378Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b8ff080-ae82-403c-b964-b2126535802e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:36:11 ha-503856 crio[678]: time="2024-08-19 11:36:11.013868620Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:56a5ad9cc18e747407a6ebf193b0141d5d39e2506db7ff36e9f281ca31d07175,PodSandboxId:1191cb555eb550a2e7a62a8d5ad002f11d0ec47cb7997421e63fcfcdd619bb42,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724067158500890442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7wpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2dd3c9-6ef2-4aaf-a44f-04fcd5b5ee2a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c7867b6691ac22ff04851850c5c61a8f266622c1c39592596364c7fc6e99c39,PodSandboxId:7074b09831f6bd3b03135218f0698131342183be25852fa7f92d1bd429ec790a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724067024291050172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c212413-ac90-45fb-92de-bfd9e9115540,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67513ebd15d07c15fbfc61b2c17800f56b9db39a3b81085018122fe96a1f9de,PodSandboxId:13c07aa9a002518609bd210554f3a19471e33ebfd9b816b1390235a4c81233e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067024223121643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5dbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530828e-1061-434c-ad2f-80847f3ab171,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8315e448000809a9b1ec9c80658a16ee2c0abdfb6a00d9cf88e506833fc2e464,PodSandboxId:0b0b0a070f3ec813823103f1ea8a06af695e4510a42cd50f86da517d8da95ef1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067024221336819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2jdlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad8206ac-dc
67-4fcb-a4ad-a431f3b0b7cd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1964134e9de80be5a82ee203eebe9f5718ba2a8b84e55479e47e81a74a259a50,PodSandboxId:9079c84056e4b237e79326194bb75a2f176c9f8bf9556b3168d050890162ac6c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724067012619972475,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-st2mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e7c93b-40a9-4902-b1a5-5a6bcc55735c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68730d308f1456cb2e025409c4d2edab9b17464c74f9beabfafc13246e8cc2b3,PodSandboxId:adace0914115cd0a47335d629a4cfbb92f7251bc72bb0a436ae6daeab0d4192d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172406700
8316850072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8054009-c06a-4ccc-b6c4-22e0f6bb632a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11a47171a5438e39273ec80f4da3583d5a56af5d5de55c977d904283dc19b112,PodSandboxId:2c2e375766b14429ccbca66cbd90a4de54eadb91037b4cff34cc1cb046a93549,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172406699967
8401575,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243cc46027459bd9ae669bd4959ae8b2,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccea80d1a22a4536a82c166633220aa67d32633919751a2cf1da7f06c91bc674,PodSandboxId:982016c43ab0e7cfe37c1296b1e9abb139d5479ac81c6f267b9f0a41da9ab8a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724066997299379623,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9aebe0e986b3c246dae4d64df3eee15,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3879d2de39f1c9189c2e3eb16e843ddcbe448dd4be3f2b5ba5617abddfeb007a,PodSandboxId:eb7c9eb1ba04206c376798a19b430446dfef69f4a421e9d862de5c7e5f687408,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724066997296906648,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d885daa5ebbe527098639eca601027a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a1ce45d7b784d0b5b53838e68b5660b6da14d8a1966f23cc6949e4c31ea98e,PodSandboxId:6a5a214f4ecfbe6589eac54fcf3c31672cfe0befef185327158583c48ed17b3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724066997259670096,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 147340613ea835c906d66a92a67bf8cf,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df01b4ed6011abdeb64cada43bcfa1f28ba45ef7e17f2991c3ec8a035d214f2e,PodSandboxId:874ce4bf24c62441806a52c87405c4fc17310af6a25939f4fa49941f7e634a7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724066997211052852,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22b93ecec4cfd4767d930360e5939ae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b8ff080-ae82-403c-b964-b2126535802e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:36:11 ha-503856 crio[678]: time="2024-08-19 11:36:11.029192496Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=7ebea9bc-2115-4610-bdc0-2934aa0b1a60 name=/runtime.v1.RuntimeService/Status
	Aug 19 11:36:11 ha-503856 crio[678]: time="2024-08-19 11:36:11.029266282Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=7ebea9bc-2115-4610-bdc0-2934aa0b1a60 name=/runtime.v1.RuntimeService/Status
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	56a5ad9cc18e7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   1191cb555eb55       busybox-7dff88458-7wpbx
	6c7867b6691ac       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   7074b09831f6b       storage-provisioner
	e67513ebd15d0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   13c07aa9a0025       coredns-6f6b679f8f-5dbrz
	8315e44800080       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   0b0b0a070f3ec       coredns-6f6b679f8f-2jdlw
	1964134e9de80       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    5 minutes ago       Running             kindnet-cni               0                   9079c84056e4b       kindnet-st2mx
	68730d308f145       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      6 minutes ago       Running             kube-proxy                0                   adace0914115c       kube-proxy-d6zw9
	11a47171a5438       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   2c2e375766b14       kube-vip-ha-503856
	ccea80d1a22a4       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      6 minutes ago       Running             kube-scheduler            0                   982016c43ab0e       kube-scheduler-ha-503856
	3879d2de39f1c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   eb7c9eb1ba042       etcd-ha-503856
	c0a1ce45d7b78       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      6 minutes ago       Running             kube-apiserver            0                   6a5a214f4ecfb       kube-apiserver-ha-503856
	df01b4ed6011a       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      6 minutes ago       Running             kube-controller-manager   0                   874ce4bf24c62       kube-controller-manager-ha-503856
	
	
	==> coredns [8315e448000809a9b1ec9c80658a16ee2c0abdfb6a00d9cf88e506833fc2e464] <==
	[INFO] 10.244.0.4:53844 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014011s
	[INFO] 10.244.3.2:37901 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001741064s
	[INFO] 10.244.3.2:44495 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000198321s
	[INFO] 10.244.3.2:59991 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001295677s
	[INFO] 10.244.3.2:36199 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168276s
	[INFO] 10.244.3.2:56390 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118777s
	[INFO] 10.244.3.2:60188 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110134s
	[INFO] 10.244.1.2:48283 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110043s
	[INFO] 10.244.1.2:47868 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001551069s
	[INFO] 10.244.1.2:40080 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132463s
	[INFO] 10.244.1.2:39365 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001154088s
	[INFO] 10.244.1.2:42435 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074226s
	[INFO] 10.244.0.4:41562 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076296s
	[INFO] 10.244.0.4:56190 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067218s
	[INFO] 10.244.3.2:36444 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119378s
	[INFO] 10.244.3.2:38880 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000151765s
	[INFO] 10.244.1.2:43281 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016005s
	[INFO] 10.244.1.2:44768 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098293s
	[INFO] 10.244.0.4:42211 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000129163s
	[INFO] 10.244.0.4:53178 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082891s
	[INFO] 10.244.3.2:39486 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118564s
	[INFO] 10.244.3.2:46262 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112723s
	[INFO] 10.244.3.2:50068 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106233s
	[INFO] 10.244.1.2:43781 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134028s
	[INFO] 10.244.1.2:47607 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000071487s
	
	
	==> coredns [e67513ebd15d07c15fbfc61b2c17800f56b9db39a3b81085018122fe96a1f9de] <==
	[INFO] 10.244.1.2:45826 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116203s
	[INFO] 10.244.1.2:51336 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000106263s
	[INFO] 10.244.1.2:52489 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001893734s
	[INFO] 10.244.0.4:58770 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111417s
	[INFO] 10.244.0.4:32786 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159712s
	[INFO] 10.244.0.4:34773 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133937s
	[INFO] 10.244.0.4:34211 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003320974s
	[INFO] 10.244.0.4:44413 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105874s
	[INFO] 10.244.0.4:37795 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067103s
	[INFO] 10.244.3.2:48365 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108129s
	[INFO] 10.244.3.2:35563 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101277s
	[INFO] 10.244.1.2:41209 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111152s
	[INFO] 10.244.1.2:59241 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000195927s
	[INFO] 10.244.1.2:32916 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097287s
	[INFO] 10.244.0.4:53548 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104877s
	[INFO] 10.244.0.4:55650 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105726s
	[INFO] 10.244.3.2:40741 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204087s
	[INFO] 10.244.3.2:41373 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105987s
	[INFO] 10.244.1.2:57537 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000193166s
	[INFO] 10.244.1.2:40497 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080869s
	[INFO] 10.244.0.4:33281 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136165s
	[INFO] 10.244.0.4:49164 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000302537s
	[INFO] 10.244.3.2:54372 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000157216s
	[INFO] 10.244.1.2:40968 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000206142s
	[INFO] 10.244.1.2:54797 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000102712s
	
	
	==> describe nodes <==
	Name:               ha-503856
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-503856
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=ha-503856
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T11_30_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 11:30:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-503856
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 11:36:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 11:33:09 +0000   Mon, 19 Aug 2024 11:30:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 11:33:09 +0000   Mon, 19 Aug 2024 11:30:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 11:33:09 +0000   Mon, 19 Aug 2024 11:30:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 11:33:09 +0000   Mon, 19 Aug 2024 11:30:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-503856
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ebf7fa993760403a8b3080e5ea2bdf31
	  System UUID:                ebf7fa99-3760-403a-8b30-80e5ea2bdf31
	  Boot ID:                    f3b2611c-5dfd-45ef-8747-94b35364374b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7wpbx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 coredns-6f6b679f8f-2jdlw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m3s
	  kube-system                 coredns-6f6b679f8f-5dbrz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m3s
	  kube-system                 etcd-ha-503856                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m6s
	  kube-system                 kindnet-st2mx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m4s
	  kube-system                 kube-apiserver-ha-503856             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-controller-manager-ha-503856    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-proxy-d6zw9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-scheduler-ha-503856             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-vip-ha-503856                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m2s   kube-proxy       
	  Normal  Starting                 6m6s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m6s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m5s   kubelet          Node ha-503856 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m5s   kubelet          Node ha-503856 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m5s   kubelet          Node ha-503856 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m4s   node-controller  Node ha-503856 event: Registered Node ha-503856 in Controller
	  Normal  NodeReady                5m48s  kubelet          Node ha-503856 status is now: NodeReady
	  Normal  RegisteredNode           5m8s   node-controller  Node ha-503856 event: Registered Node ha-503856 in Controller
	  Normal  RegisteredNode           3m56s  node-controller  Node ha-503856 event: Registered Node ha-503856 in Controller
	
	
	Name:               ha-503856-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-503856-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=ha-503856
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T11_30_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 11:30:55 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-503856-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 11:33:49 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 19 Aug 2024 11:32:58 +0000   Mon, 19 Aug 2024 11:34:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 19 Aug 2024 11:32:58 +0000   Mon, 19 Aug 2024 11:34:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 19 Aug 2024 11:32:58 +0000   Mon, 19 Aug 2024 11:34:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 19 Aug 2024 11:32:58 +0000   Mon, 19 Aug 2024 11:34:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    ha-503856-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9a5c9c65d0cb479397609eb1cad01b44
	  System UUID:                9a5c9c65-d0cb-4793-9760-9eb1cad01b44
	  Boot ID:                    c1b5d088-ad90-41a3-b25b-40f79fc85586
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nxhq6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 etcd-ha-503856-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m14s
	  kube-system                 kindnet-rnjwj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m16s
	  kube-system                 kube-apiserver-ha-503856-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-controller-manager-ha-503856-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-proxy-j2f6h                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-scheduler-ha-503856-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-vip-ha-503856-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m11s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     5m16s                  cidrAllocator    Node ha-503856-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  5m16s (x8 over 5m16s)  kubelet          Node ha-503856-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m16s (x8 over 5m16s)  kubelet          Node ha-503856-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s (x7 over 5m16s)  kubelet          Node ha-503856-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m14s                  node-controller  Node ha-503856-m02 event: Registered Node ha-503856-m02 in Controller
	  Normal  RegisteredNode           5m8s                   node-controller  Node ha-503856-m02 event: Registered Node ha-503856-m02 in Controller
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-503856-m02 event: Registered Node ha-503856-m02 in Controller
	  Normal  NodeNotReady             101s                   node-controller  Node ha-503856-m02 status is now: NodeNotReady
	
	
	Name:               ha-503856-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-503856-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=ha-503856
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T11_32_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 11:32:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-503856-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 11:36:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 11:33:08 +0000   Mon, 19 Aug 2024 11:32:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 11:33:08 +0000   Mon, 19 Aug 2024 11:32:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 11:33:08 +0000   Mon, 19 Aug 2024 11:32:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 11:33:08 +0000   Mon, 19 Aug 2024 11:32:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.122
	  Hostname:    ha-503856-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d357d9a38d274836bfe734b86d4bde83
	  System UUID:                d357d9a3-8d27-4836-bfe7-34b86d4bde83
	  Boot ID:                    3304d774-5407-4f16-9814-e7bbac644ac4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nbmlj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 etcd-ha-503856-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m2s
	  kube-system                 kindnet-hvszk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m4s
	  kube-system                 kube-apiserver-ha-503856-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-controller-manager-ha-503856-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-proxy-8xzr9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-ha-503856-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-vip-ha-503856-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m59s                kube-proxy       
	  Normal  CIDRAssignmentFailed     4m4s                 cidrAllocator    Node ha-503856-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m4s (x8 over 4m4s)  kubelet          Node ha-503856-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s (x8 over 4m4s)  kubelet          Node ha-503856-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s (x7 over 4m4s)  kubelet          Node ha-503856-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-503856-m03 event: Registered Node ha-503856-m03 in Controller
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-503856-m03 event: Registered Node ha-503856-m03 in Controller
	  Normal  RegisteredNode           3m56s                node-controller  Node ha-503856-m03 event: Registered Node ha-503856-m03 in Controller
	
	
	Name:               ha-503856-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-503856-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=ha-503856
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T11_33_11_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 11:33:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-503856-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 11:36:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 11:33:41 +0000   Mon, 19 Aug 2024 11:33:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 11:33:41 +0000   Mon, 19 Aug 2024 11:33:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 11:33:41 +0000   Mon, 19 Aug 2024 11:33:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 11:33:41 +0000   Mon, 19 Aug 2024 11:33:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    ha-503856-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9fb3b2ab1e7b42139f0ea868d31218ff
	  System UUID:                9fb3b2ab-1e7b-4213-9f0e-a868d31218ff
	  Boot ID:                    64f131f4-dcd9-4d4f-be79-4fd66dede958
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-h29sh       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-4kpcq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m56s              kube-proxy       
	  Normal  CIDRAssignmentFailed     3m                 cidrAllocator    Node ha-503856-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m (x2 over 3m1s)  kubelet          Node ha-503856-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x2 over 3m1s)  kubelet          Node ha-503856-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x2 over 3m1s)  kubelet          Node ha-503856-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m59s              node-controller  Node ha-503856-m04 event: Registered Node ha-503856-m04 in Controller
	  Normal  RegisteredNode           2m58s              node-controller  Node ha-503856-m04 event: Registered Node ha-503856-m04 in Controller
	  Normal  RegisteredNode           2m56s              node-controller  Node ha-503856-m04 event: Registered Node ha-503856-m04 in Controller
	  Normal  NodeReady                2m41s              kubelet          Node ha-503856-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug19 11:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.047854] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036981] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.730064] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.920232] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.453643] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.042926] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.060482] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062102] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.195986] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.137965] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.282518] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.003020] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.667712] systemd-fstab-generator[899]: Ignoring "noauto" option for root device
	[  +0.056031] kauditd_printk_skb: 158 callbacks suppressed
	[Aug19 11:30] systemd-fstab-generator[1324]: Ignoring "noauto" option for root device
	[  +0.088050] kauditd_printk_skb: 79 callbacks suppressed
	[  +7.046565] kauditd_printk_skb: 60 callbacks suppressed
	[Aug19 11:31] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [3879d2de39f1c9189c2e3eb16e843ddcbe448dd4be3f2b5ba5617abddfeb007a] <==
	{"level":"warn","ts":"2024-08-19T11:36:11.271891Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:36:11.279298Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:36:11.283579Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:36:11.285188Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:36:11.297620Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:36:11.305710Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:36:11.312914Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:36:11.317517Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:36:11.321238Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:36:11.327872Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:36:11.335903Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:36:11.342707Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:36:11.347238Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:36:11.351507Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:36:11.357912Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:36:11.364242Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:36:11.370646Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:36:11.375007Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:36:11.379582Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:36:11.385631Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:36:11.388394Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:36:11.394615Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:36:11.400630Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:36:11.416198Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a2b83f2dcb1ed0d","rtt":"943.162µs","error":"dial tcp 192.168.39.183:2380: i/o timeout"}
	{"level":"warn","ts":"2024-08-19T11:36:11.416307Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a2b83f2dcb1ed0d","rtt":"8.421731ms","error":"dial tcp 192.168.39.183:2380: i/o timeout"}
	
	
	==> kernel <==
	 11:36:11 up 6 min,  0 users,  load average: 0.42, 0.28, 0.15
	Linux ha-503856 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1964134e9de80be5a82ee203eebe9f5718ba2a8b84e55479e47e81a74a259a50] <==
	I0819 11:35:33.537157       1 main.go:322] Node ha-503856-m04 has CIDR [10.244.4.0/24] 
	I0819 11:35:43.537159       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0819 11:35:43.537267       1 main.go:299] handling current node
	I0819 11:35:43.537294       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0819 11:35:43.537312       1 main.go:322] Node ha-503856-m02 has CIDR [10.244.1.0/24] 
	I0819 11:35:43.537435       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0819 11:35:43.537460       1 main.go:322] Node ha-503856-m03 has CIDR [10.244.3.0/24] 
	I0819 11:35:43.537526       1 main.go:295] Handling node with IPs: map[192.168.39.161:{}]
	I0819 11:35:43.537558       1 main.go:322] Node ha-503856-m04 has CIDR [10.244.4.0/24] 
	I0819 11:35:53.537808       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0819 11:35:53.537917       1 main.go:322] Node ha-503856-m03 has CIDR [10.244.3.0/24] 
	I0819 11:35:53.538124       1 main.go:295] Handling node with IPs: map[192.168.39.161:{}]
	I0819 11:35:53.538158       1 main.go:322] Node ha-503856-m04 has CIDR [10.244.4.0/24] 
	I0819 11:35:53.538230       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0819 11:35:53.538249       1 main.go:299] handling current node
	I0819 11:35:53.538271       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0819 11:35:53.538287       1 main.go:322] Node ha-503856-m02 has CIDR [10.244.1.0/24] 
	I0819 11:36:03.538088       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0819 11:36:03.538133       1 main.go:322] Node ha-503856-m02 has CIDR [10.244.1.0/24] 
	I0819 11:36:03.538267       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0819 11:36:03.538290       1 main.go:322] Node ha-503856-m03 has CIDR [10.244.3.0/24] 
	I0819 11:36:03.538341       1 main.go:295] Handling node with IPs: map[192.168.39.161:{}]
	I0819 11:36:03.538357       1 main.go:322] Node ha-503856-m04 has CIDR [10.244.4.0/24] 
	I0819 11:36:03.538418       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0819 11:36:03.538438       1 main.go:299] handling current node
	
	
	==> kube-apiserver [c0a1ce45d7b784d0b5b53838e68b5660b6da14d8a1966f23cc6949e4c31ea98e] <==
	I0819 11:30:05.944658       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 11:30:05.962433       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0819 11:30:05.972121       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 11:30:07.781535       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0819 11:30:07.870178       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0819 11:32:08.060799       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0819 11:32:08.061328       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 322.299µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0819 11:32:08.062189       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0819 11:32:08.063475       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0819 11:32:08.065800       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.098798ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0819 11:32:39.975641       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54540: use of closed network connection
	E0819 11:32:40.165200       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54556: use of closed network connection
	E0819 11:32:40.364705       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54574: use of closed network connection
	E0819 11:32:40.560779       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54592: use of closed network connection
	E0819 11:32:40.746140       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54616: use of closed network connection
	E0819 11:32:40.925040       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54636: use of closed network connection
	E0819 11:32:41.094328       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54650: use of closed network connection
	E0819 11:32:41.277612       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54666: use of closed network connection
	E0819 11:32:41.452804       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54688: use of closed network connection
	E0819 11:32:41.750815       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54714: use of closed network connection
	E0819 11:32:41.936432       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54740: use of closed network connection
	E0819 11:32:42.120632       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54764: use of closed network connection
	E0819 11:32:42.300925       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54786: use of closed network connection
	E0819 11:32:42.487047       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54798: use of closed network connection
	W0819 11:34:02.184667       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.122]
	
	
	==> kube-controller-manager [df01b4ed6011abdeb64cada43bcfa1f28ba45ef7e17f2991c3ec8a035d214f2e] <==
	I0819 11:33:11.171483       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-503856-m04" podCIDRs=["10.244.4.0/24"]
	I0819 11:33:11.171595       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:11.171654       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:11.180152       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:11.444236       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:11.838190       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:12.194495       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-503856-m04"
	I0819 11:33:12.279464       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:13.960134       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:14.009602       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:15.267879       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:15.302991       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:21.512563       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:30.356372       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:30.356464       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-503856-m04"
	I0819 11:33:30.371949       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:32.207322       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:41.837921       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:34:30.300213       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m02"
	I0819 11:34:30.300638       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-503856-m04"
	I0819 11:34:30.320643       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m02"
	I0819 11:34:30.353664       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.521527ms"
	I0819 11:34:30.353908       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.913µs"
	I0819 11:34:32.311614       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m02"
	I0819 11:34:35.497733       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m02"
	
	
	==> kube-proxy [68730d308f1456cb2e025409c4d2edab9b17464c74f9beabfafc13246e8cc2b3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 11:30:08.502363       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 11:30:08.511263       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.102"]
	E0819 11:30:08.511399       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 11:30:08.545498       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 11:30:08.545608       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 11:30:08.545648       1 server_linux.go:169] "Using iptables Proxier"
	I0819 11:30:08.548637       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 11:30:08.549020       1 server.go:483] "Version info" version="v1.31.0"
	I0819 11:30:08.549220       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 11:30:08.550765       1 config.go:197] "Starting service config controller"
	I0819 11:30:08.550867       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 11:30:08.550913       1 config.go:104] "Starting endpoint slice config controller"
	I0819 11:30:08.550930       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 11:30:08.551577       1 config.go:326] "Starting node config controller"
	I0819 11:30:08.551621       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 11:30:08.651853       1 shared_informer.go:320] Caches are synced for node config
	I0819 11:30:08.652003       1 shared_informer.go:320] Caches are synced for service config
	I0819 11:30:08.652014       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ccea80d1a22a4536a82c166633220aa67d32633919751a2cf1da7f06c91bc674] <==
	W0819 11:30:01.347827       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 11:30:01.349511       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 11:30:01.452894       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 11:30:01.453008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:30:01.486121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 11:30:01.486169       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:30:01.488303       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 11:30:01.488336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:30:01.523022       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 11:30:01.523111       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:30:01.525952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 11:30:01.526028       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:30:01.659419       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 11:30:01.660325       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 11:30:03.075764       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 11:33:11.218857       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-h29sh\": pod kindnet-h29sh is already assigned to node \"ha-503856-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-h29sh" node="ha-503856-m04"
	E0819 11:33:11.219015       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-h29sh\": pod kindnet-h29sh is already assigned to node \"ha-503856-m04\"" pod="kube-system/kindnet-h29sh"
	E0819 11:33:11.221900       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4kpcq\": pod kube-proxy-4kpcq is already assigned to node \"ha-503856-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4kpcq" node="ha-503856-m04"
	E0819 11:33:11.221962       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9f038ca5-2e98-4126-9959-f24f6ab3a802(kube-system/kube-proxy-4kpcq) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-4kpcq"
	E0819 11:33:11.221977       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4kpcq\": pod kube-proxy-4kpcq is already assigned to node \"ha-503856-m04\"" pod="kube-system/kube-proxy-4kpcq"
	I0819 11:33:11.222009       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4kpcq" node="ha-503856-m04"
	E0819 11:33:11.260369       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5zzk5\": pod kube-proxy-5zzk5 is already assigned to node \"ha-503856-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5zzk5" node="ha-503856-m04"
	E0819 11:33:11.260439       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 29216c29-6ceb-411d-a714-c94d674aed3f(kube-system/kube-proxy-5zzk5) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5zzk5"
	E0819 11:33:11.260454       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5zzk5\": pod kube-proxy-5zzk5 is already assigned to node \"ha-503856-m04\"" pod="kube-system/kube-proxy-5zzk5"
	I0819 11:33:11.260471       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5zzk5" node="ha-503856-m04"
	
	
	==> kubelet <==
	Aug 19 11:34:55 ha-503856 kubelet[1331]: E0819 11:34:55.988793    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067295987535691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:35:05 ha-503856 kubelet[1331]: E0819 11:35:05.895861    1331 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 11:35:05 ha-503856 kubelet[1331]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 11:35:05 ha-503856 kubelet[1331]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 11:35:05 ha-503856 kubelet[1331]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 11:35:05 ha-503856 kubelet[1331]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 11:35:05 ha-503856 kubelet[1331]: E0819 11:35:05.990936    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067305990532907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:35:05 ha-503856 kubelet[1331]: E0819 11:35:05.990972    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067305990532907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:35:15 ha-503856 kubelet[1331]: E0819 11:35:15.993595    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067315993196389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:35:15 ha-503856 kubelet[1331]: E0819 11:35:15.993637    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067315993196389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:35:25 ha-503856 kubelet[1331]: E0819 11:35:25.995266    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067325994874336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:35:25 ha-503856 kubelet[1331]: E0819 11:35:25.995621    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067325994874336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:35:35 ha-503856 kubelet[1331]: E0819 11:35:35.997459    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067335996811970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:35:35 ha-503856 kubelet[1331]: E0819 11:35:35.997712    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067335996811970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:35:45 ha-503856 kubelet[1331]: E0819 11:35:45.999233    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067345998875993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:35:45 ha-503856 kubelet[1331]: E0819 11:35:45.999261    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067345998875993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:35:56 ha-503856 kubelet[1331]: E0819 11:35:56.000660    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067356000397456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:35:56 ha-503856 kubelet[1331]: E0819 11:35:56.000699    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067356000397456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:36:05 ha-503856 kubelet[1331]: E0819 11:36:05.899682    1331 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 11:36:05 ha-503856 kubelet[1331]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 11:36:05 ha-503856 kubelet[1331]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 11:36:05 ha-503856 kubelet[1331]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 11:36:05 ha-503856 kubelet[1331]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 11:36:06 ha-503856 kubelet[1331]: E0819 11:36:06.002213    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067366001919127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:36:06 ha-503856 kubelet[1331]: E0819 11:36:06.002270    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067366001919127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-503856 -n ha-503856
helpers_test.go:261: (dbg) Run:  kubectl --context ha-503856 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.70s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (60.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr: exit status 3 (3.209831308s)

                                                
                                                
-- stdout --
	ha-503856
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-503856-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-503856-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-503856-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:36:15.984788  126121 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:36:15.985045  126121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:15.985055  126121 out.go:358] Setting ErrFile to fd 2...
	I0819 11:36:15.985062  126121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:15.985235  126121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 11:36:15.985424  126121 out.go:352] Setting JSON to false
	I0819 11:36:15.985456  126121 mustload.go:65] Loading cluster: ha-503856
	I0819 11:36:15.985509  126121 notify.go:220] Checking for updates...
	I0819 11:36:15.985870  126121 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:36:15.985888  126121 status.go:255] checking status of ha-503856 ...
	I0819 11:36:15.986292  126121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:15.986364  126121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:16.006166  126121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32959
	I0819 11:36:16.006586  126121 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:16.007196  126121 main.go:141] libmachine: Using API Version  1
	I0819 11:36:16.007221  126121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:16.007567  126121 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:16.007780  126121 main.go:141] libmachine: (ha-503856) Calling .GetState
	I0819 11:36:16.009365  126121 status.go:330] ha-503856 host status = "Running" (err=<nil>)
	I0819 11:36:16.009395  126121 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:36:16.009688  126121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:16.009724  126121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:16.026162  126121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45541
	I0819 11:36:16.026620  126121 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:16.027311  126121 main.go:141] libmachine: Using API Version  1
	I0819 11:36:16.027358  126121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:16.027748  126121 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:16.027989  126121 main.go:141] libmachine: (ha-503856) Calling .GetIP
	I0819 11:36:16.031196  126121 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:16.031763  126121 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:36:16.031795  126121 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:16.031952  126121 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:36:16.032278  126121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:16.032318  126121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:16.049230  126121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39001
	I0819 11:36:16.049681  126121 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:16.050124  126121 main.go:141] libmachine: Using API Version  1
	I0819 11:36:16.050146  126121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:16.050427  126121 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:16.050628  126121 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:36:16.050841  126121 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:16.050865  126121 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:36:16.053909  126121 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:16.054445  126121 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:36:16.054475  126121 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:16.054629  126121 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:36:16.054812  126121 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:36:16.054953  126121 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:36:16.055113  126121 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:36:16.139958  126121 ssh_runner.go:195] Run: systemctl --version
	I0819 11:36:16.146006  126121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:36:16.160225  126121 kubeconfig.go:125] found "ha-503856" server: "https://192.168.39.254:8443"
	I0819 11:36:16.160261  126121 api_server.go:166] Checking apiserver status ...
	I0819 11:36:16.160304  126121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:36:16.174538  126121 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup
	W0819 11:36:16.189311  126121 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:36:16.189374  126121 ssh_runner.go:195] Run: ls
	I0819 11:36:16.193916  126121 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 11:36:16.197946  126121 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 11:36:16.197973  126121 status.go:422] ha-503856 apiserver status = Running (err=<nil>)
	I0819 11:36:16.197984  126121 status.go:257] ha-503856 status: &{Name:ha-503856 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:36:16.198004  126121 status.go:255] checking status of ha-503856-m02 ...
	I0819 11:36:16.198347  126121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:16.198377  126121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:16.213556  126121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42339
	I0819 11:36:16.213955  126121 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:16.214440  126121 main.go:141] libmachine: Using API Version  1
	I0819 11:36:16.214472  126121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:16.214830  126121 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:16.215018  126121 main.go:141] libmachine: (ha-503856-m02) Calling .GetState
	I0819 11:36:16.216550  126121 status.go:330] ha-503856-m02 host status = "Running" (err=<nil>)
	I0819 11:36:16.216567  126121 host.go:66] Checking if "ha-503856-m02" exists ...
	I0819 11:36:16.216859  126121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:16.216885  126121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:16.232980  126121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46709
	I0819 11:36:16.233402  126121 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:16.233898  126121 main.go:141] libmachine: Using API Version  1
	I0819 11:36:16.233920  126121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:16.234270  126121 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:16.234490  126121 main.go:141] libmachine: (ha-503856-m02) Calling .GetIP
	I0819 11:36:16.237131  126121 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:36:16.237521  126121 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:36:16.237551  126121 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:36:16.237666  126121 host.go:66] Checking if "ha-503856-m02" exists ...
	I0819 11:36:16.238054  126121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:16.238094  126121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:16.254682  126121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35773
	I0819 11:36:16.255179  126121 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:16.255704  126121 main.go:141] libmachine: Using API Version  1
	I0819 11:36:16.255738  126121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:16.256075  126121 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:16.256278  126121 main.go:141] libmachine: (ha-503856-m02) Calling .DriverName
	I0819 11:36:16.256484  126121 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:16.256508  126121 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:36:16.259097  126121 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:36:16.259496  126121 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:36:16.259541  126121 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:36:16.259612  126121 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:36:16.259893  126121 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:36:16.260089  126121 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:36:16.260252  126121 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/id_rsa Username:docker}
	W0819 11:36:18.800054  126121 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.183:22: connect: no route to host
	W0819 11:36:18.800151  126121 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	E0819 11:36:18.800175  126121 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	I0819 11:36:18.800184  126121 status.go:257] ha-503856-m02 status: &{Name:ha-503856-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 11:36:18.800205  126121 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	I0819 11:36:18.800212  126121 status.go:255] checking status of ha-503856-m03 ...
	I0819 11:36:18.800550  126121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:18.800581  126121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:18.816490  126121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46437
	I0819 11:36:18.816965  126121 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:18.817472  126121 main.go:141] libmachine: Using API Version  1
	I0819 11:36:18.817497  126121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:18.817827  126121 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:18.818007  126121 main.go:141] libmachine: (ha-503856-m03) Calling .GetState
	I0819 11:36:18.819545  126121 status.go:330] ha-503856-m03 host status = "Running" (err=<nil>)
	I0819 11:36:18.819565  126121 host.go:66] Checking if "ha-503856-m03" exists ...
	I0819 11:36:18.819959  126121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:18.820001  126121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:18.835169  126121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
	I0819 11:36:18.835673  126121 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:18.836227  126121 main.go:141] libmachine: Using API Version  1
	I0819 11:36:18.836248  126121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:18.836613  126121 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:18.836802  126121 main.go:141] libmachine: (ha-503856-m03) Calling .GetIP
	I0819 11:36:18.839934  126121 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:18.840399  126121 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:36:18.840434  126121 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:18.840604  126121 host.go:66] Checking if "ha-503856-m03" exists ...
	I0819 11:36:18.840933  126121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:18.840970  126121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:18.856626  126121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46759
	I0819 11:36:18.857004  126121 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:18.857444  126121 main.go:141] libmachine: Using API Version  1
	I0819 11:36:18.857465  126121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:18.857849  126121 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:18.858075  126121 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:36:18.858259  126121 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:18.858283  126121 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:36:18.861612  126121 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:18.862027  126121 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:36:18.862059  126121 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:18.862216  126121 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:36:18.862413  126121 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:36:18.862582  126121 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:36:18.862733  126121 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/id_rsa Username:docker}
	I0819 11:36:18.943166  126121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:36:18.956630  126121 kubeconfig.go:125] found "ha-503856" server: "https://192.168.39.254:8443"
	I0819 11:36:18.956659  126121 api_server.go:166] Checking apiserver status ...
	I0819 11:36:18.956718  126121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:36:18.970026  126121 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1469/cgroup
	W0819 11:36:18.979389  126121 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1469/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:36:18.979441  126121 ssh_runner.go:195] Run: ls
	I0819 11:36:18.983705  126121 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 11:36:18.990640  126121 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 11:36:18.990669  126121 status.go:422] ha-503856-m03 apiserver status = Running (err=<nil>)
	I0819 11:36:18.990692  126121 status.go:257] ha-503856-m03 status: &{Name:ha-503856-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:36:18.990710  126121 status.go:255] checking status of ha-503856-m04 ...
	I0819 11:36:18.991015  126121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:18.991044  126121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:19.008022  126121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37801
	I0819 11:36:19.008494  126121 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:19.008999  126121 main.go:141] libmachine: Using API Version  1
	I0819 11:36:19.009020  126121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:19.009301  126121 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:19.009456  126121 main.go:141] libmachine: (ha-503856-m04) Calling .GetState
	I0819 11:36:19.011180  126121 status.go:330] ha-503856-m04 host status = "Running" (err=<nil>)
	I0819 11:36:19.011200  126121 host.go:66] Checking if "ha-503856-m04" exists ...
	I0819 11:36:19.011514  126121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:19.011544  126121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:19.026734  126121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37005
	I0819 11:36:19.027263  126121 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:19.027845  126121 main.go:141] libmachine: Using API Version  1
	I0819 11:36:19.027866  126121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:19.028226  126121 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:19.028392  126121 main.go:141] libmachine: (ha-503856-m04) Calling .GetIP
	I0819 11:36:19.031500  126121 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:19.031971  126121 main.go:141] libmachine: (ha-503856-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d3:72", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:32:57 +0000 UTC Type:0 Mac:52:54:00:56:d3:72 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-503856-m04 Clientid:01:52:54:00:56:d3:72}
	I0819 11:36:19.032006  126121 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:19.032142  126121 host.go:66] Checking if "ha-503856-m04" exists ...
	I0819 11:36:19.032573  126121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:19.032624  126121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:19.048129  126121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36359
	I0819 11:36:19.048584  126121 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:19.049073  126121 main.go:141] libmachine: Using API Version  1
	I0819 11:36:19.049098  126121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:19.049417  126121 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:19.049623  126121 main.go:141] libmachine: (ha-503856-m04) Calling .DriverName
	I0819 11:36:19.049811  126121 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:19.049830  126121 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHHostname
	I0819 11:36:19.053361  126121 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:19.053875  126121 main.go:141] libmachine: (ha-503856-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d3:72", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:32:57 +0000 UTC Type:0 Mac:52:54:00:56:d3:72 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-503856-m04 Clientid:01:52:54:00:56:d3:72}
	I0819 11:36:19.053905  126121 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:19.054078  126121 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHPort
	I0819 11:36:19.054301  126121 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHKeyPath
	I0819 11:36:19.054471  126121 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHUsername
	I0819 11:36:19.054666  126121 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m04/id_rsa Username:docker}
	I0819 11:36:19.134836  126121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:36:19.149996  126121 status.go:257] ha-503856-m04 status: &{Name:ha-503856-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
E0819 11:36:19.209647  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr: exit status 3 (4.932561678s)

                                                
                                                
-- stdout --
	ha-503856
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-503856-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-503856-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-503856-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:36:20.402157  126221 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:36:20.402392  126221 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:20.402400  126221 out.go:358] Setting ErrFile to fd 2...
	I0819 11:36:20.402405  126221 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:20.402587  126221 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 11:36:20.402748  126221 out.go:352] Setting JSON to false
	I0819 11:36:20.402775  126221 mustload.go:65] Loading cluster: ha-503856
	I0819 11:36:20.402819  126221 notify.go:220] Checking for updates...
	I0819 11:36:20.403122  126221 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:36:20.403136  126221 status.go:255] checking status of ha-503856 ...
	I0819 11:36:20.403590  126221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:20.403638  126221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:20.419392  126221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34531
	I0819 11:36:20.419972  126221 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:20.420528  126221 main.go:141] libmachine: Using API Version  1
	I0819 11:36:20.420553  126221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:20.420893  126221 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:20.421096  126221 main.go:141] libmachine: (ha-503856) Calling .GetState
	I0819 11:36:20.422854  126221 status.go:330] ha-503856 host status = "Running" (err=<nil>)
	I0819 11:36:20.422875  126221 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:36:20.423169  126221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:20.423210  126221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:20.439528  126221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43861
	I0819 11:36:20.439995  126221 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:20.440502  126221 main.go:141] libmachine: Using API Version  1
	I0819 11:36:20.440528  126221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:20.440982  126221 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:20.441229  126221 main.go:141] libmachine: (ha-503856) Calling .GetIP
	I0819 11:36:20.444490  126221 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:20.444886  126221 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:36:20.444918  126221 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:20.445103  126221 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:36:20.445473  126221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:20.445534  126221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:20.464544  126221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39205
	I0819 11:36:20.465031  126221 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:20.465575  126221 main.go:141] libmachine: Using API Version  1
	I0819 11:36:20.465597  126221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:20.465937  126221 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:20.466169  126221 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:36:20.466369  126221 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:20.466417  126221 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:36:20.469756  126221 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:20.470312  126221 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:36:20.470339  126221 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:20.470530  126221 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:36:20.470739  126221 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:36:20.470903  126221 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:36:20.471057  126221 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:36:20.551895  126221 ssh_runner.go:195] Run: systemctl --version
	I0819 11:36:20.559388  126221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:36:20.573850  126221 kubeconfig.go:125] found "ha-503856" server: "https://192.168.39.254:8443"
	I0819 11:36:20.573889  126221 api_server.go:166] Checking apiserver status ...
	I0819 11:36:20.573936  126221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:36:20.589629  126221 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup
	W0819 11:36:20.599231  126221 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:36:20.599287  126221 ssh_runner.go:195] Run: ls
	I0819 11:36:20.603578  126221 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 11:36:20.608149  126221 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 11:36:20.608211  126221 status.go:422] ha-503856 apiserver status = Running (err=<nil>)
	I0819 11:36:20.608223  126221 status.go:257] ha-503856 status: &{Name:ha-503856 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:36:20.608254  126221 status.go:255] checking status of ha-503856-m02 ...
	I0819 11:36:20.608674  126221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:20.608713  126221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:20.624281  126221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41087
	I0819 11:36:20.624716  126221 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:20.625292  126221 main.go:141] libmachine: Using API Version  1
	I0819 11:36:20.625317  126221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:20.625691  126221 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:20.625923  126221 main.go:141] libmachine: (ha-503856-m02) Calling .GetState
	I0819 11:36:20.627549  126221 status.go:330] ha-503856-m02 host status = "Running" (err=<nil>)
	I0819 11:36:20.627567  126221 host.go:66] Checking if "ha-503856-m02" exists ...
	I0819 11:36:20.627922  126221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:20.627981  126221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:20.643154  126221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35843
	I0819 11:36:20.643593  126221 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:20.644131  126221 main.go:141] libmachine: Using API Version  1
	I0819 11:36:20.644166  126221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:20.644494  126221 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:20.644693  126221 main.go:141] libmachine: (ha-503856-m02) Calling .GetIP
	I0819 11:36:20.647640  126221 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:36:20.648267  126221 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:36:20.648292  126221 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:36:20.648464  126221 host.go:66] Checking if "ha-503856-m02" exists ...
	I0819 11:36:20.648795  126221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:20.648840  126221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:20.664963  126221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37445
	I0819 11:36:20.665417  126221 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:20.665936  126221 main.go:141] libmachine: Using API Version  1
	I0819 11:36:20.665956  126221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:20.666271  126221 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:20.666492  126221 main.go:141] libmachine: (ha-503856-m02) Calling .DriverName
	I0819 11:36:20.666666  126221 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:20.666695  126221 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:36:20.669491  126221 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:36:20.669921  126221 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:36:20.669949  126221 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:36:20.670139  126221 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:36:20.670335  126221 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:36:20.670499  126221 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:36:20.670656  126221 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/id_rsa Username:docker}
	W0819 11:36:21.872476  126221 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.183:22: connect: no route to host
	I0819 11:36:21.872549  126221 retry.go:31] will retry after 262.099239ms: dial tcp 192.168.39.183:22: connect: no route to host
	W0819 11:36:24.940053  126221 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.183:22: connect: no route to host
	W0819 11:36:24.940177  126221 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	E0819 11:36:24.940207  126221 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	I0819 11:36:24.940216  126221 status.go:257] ha-503856-m02 status: &{Name:ha-503856-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 11:36:24.940239  126221 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	I0819 11:36:24.940252  126221 status.go:255] checking status of ha-503856-m03 ...
	I0819 11:36:24.940608  126221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:24.940656  126221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:24.955992  126221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34919
	I0819 11:36:24.956476  126221 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:24.956950  126221 main.go:141] libmachine: Using API Version  1
	I0819 11:36:24.956981  126221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:24.957350  126221 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:24.957572  126221 main.go:141] libmachine: (ha-503856-m03) Calling .GetState
	I0819 11:36:24.959306  126221 status.go:330] ha-503856-m03 host status = "Running" (err=<nil>)
	I0819 11:36:24.959325  126221 host.go:66] Checking if "ha-503856-m03" exists ...
	I0819 11:36:24.959644  126221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:24.959681  126221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:24.975663  126221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35969
	I0819 11:36:24.976133  126221 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:24.976553  126221 main.go:141] libmachine: Using API Version  1
	I0819 11:36:24.976574  126221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:24.976924  126221 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:24.977103  126221 main.go:141] libmachine: (ha-503856-m03) Calling .GetIP
	I0819 11:36:24.979876  126221 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:24.980236  126221 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:36:24.980261  126221 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:24.980444  126221 host.go:66] Checking if "ha-503856-m03" exists ...
	I0819 11:36:24.980798  126221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:24.980850  126221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:24.996296  126221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I0819 11:36:24.996806  126221 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:24.997322  126221 main.go:141] libmachine: Using API Version  1
	I0819 11:36:24.997362  126221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:24.997683  126221 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:24.997858  126221 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:36:24.998063  126221 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:24.998086  126221 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:36:25.001009  126221 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:25.001404  126221 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:36:25.001446  126221 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:25.001586  126221 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:36:25.001773  126221 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:36:25.001926  126221 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:36:25.002067  126221 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/id_rsa Username:docker}
	I0819 11:36:25.079456  126221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:36:25.096369  126221 kubeconfig.go:125] found "ha-503856" server: "https://192.168.39.254:8443"
	I0819 11:36:25.096399  126221 api_server.go:166] Checking apiserver status ...
	I0819 11:36:25.096443  126221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:36:25.110708  126221 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1469/cgroup
	W0819 11:36:25.120523  126221 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1469/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:36:25.120585  126221 ssh_runner.go:195] Run: ls
	I0819 11:36:25.125051  126221 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 11:36:25.129238  126221 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 11:36:25.129268  126221 status.go:422] ha-503856-m03 apiserver status = Running (err=<nil>)
	I0819 11:36:25.129280  126221 status.go:257] ha-503856-m03 status: &{Name:ha-503856-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:36:25.129300  126221 status.go:255] checking status of ha-503856-m04 ...
	I0819 11:36:25.129652  126221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:25.129675  126221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:25.145104  126221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33061
	I0819 11:36:25.145562  126221 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:25.146042  126221 main.go:141] libmachine: Using API Version  1
	I0819 11:36:25.146062  126221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:25.146396  126221 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:25.146645  126221 main.go:141] libmachine: (ha-503856-m04) Calling .GetState
	I0819 11:36:25.148293  126221 status.go:330] ha-503856-m04 host status = "Running" (err=<nil>)
	I0819 11:36:25.148313  126221 host.go:66] Checking if "ha-503856-m04" exists ...
	I0819 11:36:25.148621  126221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:25.148649  126221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:25.165802  126221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43369
	I0819 11:36:25.166282  126221 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:25.166866  126221 main.go:141] libmachine: Using API Version  1
	I0819 11:36:25.166893  126221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:25.167319  126221 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:25.167541  126221 main.go:141] libmachine: (ha-503856-m04) Calling .GetIP
	I0819 11:36:25.170234  126221 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:25.170628  126221 main.go:141] libmachine: (ha-503856-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d3:72", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:32:57 +0000 UTC Type:0 Mac:52:54:00:56:d3:72 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-503856-m04 Clientid:01:52:54:00:56:d3:72}
	I0819 11:36:25.170658  126221 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:25.170787  126221 host.go:66] Checking if "ha-503856-m04" exists ...
	I0819 11:36:25.171099  126221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:25.171141  126221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:25.186374  126221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
	I0819 11:36:25.186850  126221 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:25.187353  126221 main.go:141] libmachine: Using API Version  1
	I0819 11:36:25.187375  126221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:25.187798  126221 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:25.188010  126221 main.go:141] libmachine: (ha-503856-m04) Calling .DriverName
	I0819 11:36:25.188226  126221 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:25.188250  126221 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHHostname
	I0819 11:36:25.190849  126221 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:25.191238  126221 main.go:141] libmachine: (ha-503856-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d3:72", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:32:57 +0000 UTC Type:0 Mac:52:54:00:56:d3:72 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-503856-m04 Clientid:01:52:54:00:56:d3:72}
	I0819 11:36:25.191267  126221 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:25.191408  126221 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHPort
	I0819 11:36:25.191589  126221 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHKeyPath
	I0819 11:36:25.191749  126221 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHUsername
	I0819 11:36:25.191905  126221 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m04/id_rsa Username:docker}
	I0819 11:36:25.271056  126221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:36:25.286678  126221 status.go:257] ha-503856-m04 status: &{Name:ha-503856-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr: exit status 3 (4.742876736s)

                                                
                                                
-- stdout --
	ha-503856
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-503856-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-503856-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-503856-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:36:26.995017  126337 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:36:26.995265  126337 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:26.995274  126337 out.go:358] Setting ErrFile to fd 2...
	I0819 11:36:26.995278  126337 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:26.995435  126337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 11:36:26.995605  126337 out.go:352] Setting JSON to false
	I0819 11:36:26.995631  126337 mustload.go:65] Loading cluster: ha-503856
	I0819 11:36:26.995745  126337 notify.go:220] Checking for updates...
	I0819 11:36:26.996067  126337 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:36:26.996086  126337 status.go:255] checking status of ha-503856 ...
	I0819 11:36:26.996540  126337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:26.996616  126337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:27.016861  126337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39013
	I0819 11:36:27.017371  126337 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:27.017930  126337 main.go:141] libmachine: Using API Version  1
	I0819 11:36:27.017960  126337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:27.018352  126337 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:27.018557  126337 main.go:141] libmachine: (ha-503856) Calling .GetState
	I0819 11:36:27.020432  126337 status.go:330] ha-503856 host status = "Running" (err=<nil>)
	I0819 11:36:27.020456  126337 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:36:27.020801  126337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:27.020850  126337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:27.037366  126337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37715
	I0819 11:36:27.037861  126337 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:27.038363  126337 main.go:141] libmachine: Using API Version  1
	I0819 11:36:27.038390  126337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:27.038749  126337 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:27.038984  126337 main.go:141] libmachine: (ha-503856) Calling .GetIP
	I0819 11:36:27.041992  126337 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:27.042551  126337 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:36:27.042585  126337 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:27.042794  126337 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:36:27.043140  126337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:27.043187  126337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:27.058823  126337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34693
	I0819 11:36:27.059359  126337 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:27.059935  126337 main.go:141] libmachine: Using API Version  1
	I0819 11:36:27.059957  126337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:27.060283  126337 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:27.060476  126337 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:36:27.060713  126337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:27.060748  126337 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:36:27.063630  126337 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:27.064084  126337 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:36:27.064115  126337 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:27.064271  126337 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:36:27.064501  126337 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:36:27.064638  126337 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:36:27.064767  126337 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:36:27.147659  126337 ssh_runner.go:195] Run: systemctl --version
	I0819 11:36:27.153616  126337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:36:27.168977  126337 kubeconfig.go:125] found "ha-503856" server: "https://192.168.39.254:8443"
	I0819 11:36:27.169018  126337 api_server.go:166] Checking apiserver status ...
	I0819 11:36:27.169072  126337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:36:27.182830  126337 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup
	W0819 11:36:27.192499  126337 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:36:27.192577  126337 ssh_runner.go:195] Run: ls
	I0819 11:36:27.197151  126337 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 11:36:27.203394  126337 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 11:36:27.203424  126337 status.go:422] ha-503856 apiserver status = Running (err=<nil>)
	I0819 11:36:27.203436  126337 status.go:257] ha-503856 status: &{Name:ha-503856 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:36:27.203458  126337 status.go:255] checking status of ha-503856-m02 ...
	I0819 11:36:27.203785  126337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:27.203841  126337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:27.219221  126337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34301
	I0819 11:36:27.219683  126337 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:27.220152  126337 main.go:141] libmachine: Using API Version  1
	I0819 11:36:27.220174  126337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:27.220524  126337 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:27.220725  126337 main.go:141] libmachine: (ha-503856-m02) Calling .GetState
	I0819 11:36:27.222551  126337 status.go:330] ha-503856-m02 host status = "Running" (err=<nil>)
	I0819 11:36:27.222573  126337 host.go:66] Checking if "ha-503856-m02" exists ...
	I0819 11:36:27.222993  126337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:27.223045  126337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:27.239151  126337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33013
	I0819 11:36:27.239622  126337 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:27.240096  126337 main.go:141] libmachine: Using API Version  1
	I0819 11:36:27.240117  126337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:27.240476  126337 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:27.240689  126337 main.go:141] libmachine: (ha-503856-m02) Calling .GetIP
	I0819 11:36:27.243410  126337 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:36:27.243860  126337 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:36:27.243901  126337 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:36:27.244054  126337 host.go:66] Checking if "ha-503856-m02" exists ...
	I0819 11:36:27.244460  126337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:27.244505  126337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:27.259986  126337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43003
	I0819 11:36:27.260453  126337 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:27.260916  126337 main.go:141] libmachine: Using API Version  1
	I0819 11:36:27.260935  126337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:27.261270  126337 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:27.261514  126337 main.go:141] libmachine: (ha-503856-m02) Calling .DriverName
	I0819 11:36:27.261707  126337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:27.261729  126337 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:36:27.264626  126337 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:36:27.265162  126337 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:36:27.265190  126337 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:36:27.265410  126337 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:36:27.265610  126337 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:36:27.265894  126337 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:36:27.266118  126337 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/id_rsa Username:docker}
	W0819 11:36:28.011994  126337 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.183:22: connect: no route to host
	I0819 11:36:28.012047  126337 retry.go:31] will retry after 255.383768ms: dial tcp 192.168.39.183:22: connect: no route to host
	W0819 11:36:31.340018  126337 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.183:22: connect: no route to host
	W0819 11:36:31.340147  126337 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	E0819 11:36:31.340176  126337 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	I0819 11:36:31.340187  126337 status.go:257] ha-503856-m02 status: &{Name:ha-503856-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 11:36:31.340211  126337 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	I0819 11:36:31.340222  126337 status.go:255] checking status of ha-503856-m03 ...
	I0819 11:36:31.340656  126337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:31.340721  126337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:31.356588  126337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46395
	I0819 11:36:31.357086  126337 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:31.357626  126337 main.go:141] libmachine: Using API Version  1
	I0819 11:36:31.357652  126337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:31.357980  126337 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:31.358225  126337 main.go:141] libmachine: (ha-503856-m03) Calling .GetState
	I0819 11:36:31.360016  126337 status.go:330] ha-503856-m03 host status = "Running" (err=<nil>)
	I0819 11:36:31.360033  126337 host.go:66] Checking if "ha-503856-m03" exists ...
	I0819 11:36:31.360360  126337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:31.360405  126337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:31.375572  126337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40971
	I0819 11:36:31.376067  126337 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:31.376560  126337 main.go:141] libmachine: Using API Version  1
	I0819 11:36:31.376582  126337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:31.376905  126337 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:31.377077  126337 main.go:141] libmachine: (ha-503856-m03) Calling .GetIP
	I0819 11:36:31.379938  126337 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:31.380461  126337 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:36:31.380492  126337 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:31.380650  126337 host.go:66] Checking if "ha-503856-m03" exists ...
	I0819 11:36:31.381000  126337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:31.381041  126337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:31.397521  126337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39643
	I0819 11:36:31.398036  126337 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:31.398587  126337 main.go:141] libmachine: Using API Version  1
	I0819 11:36:31.398612  126337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:31.398971  126337 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:31.399170  126337 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:36:31.399405  126337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:31.399429  126337 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:36:31.402388  126337 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:31.402769  126337 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:36:31.402806  126337 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:31.402951  126337 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:36:31.403151  126337 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:36:31.403327  126337 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:36:31.403484  126337 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/id_rsa Username:docker}
	I0819 11:36:31.483652  126337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:36:31.497933  126337 kubeconfig.go:125] found "ha-503856" server: "https://192.168.39.254:8443"
	I0819 11:36:31.497961  126337 api_server.go:166] Checking apiserver status ...
	I0819 11:36:31.497994  126337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:36:31.512047  126337 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1469/cgroup
	W0819 11:36:31.521848  126337 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1469/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:36:31.521916  126337 ssh_runner.go:195] Run: ls
	I0819 11:36:31.526390  126337 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 11:36:31.530589  126337 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 11:36:31.530614  126337 status.go:422] ha-503856-m03 apiserver status = Running (err=<nil>)
	I0819 11:36:31.530623  126337 status.go:257] ha-503856-m03 status: &{Name:ha-503856-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:36:31.530646  126337 status.go:255] checking status of ha-503856-m04 ...
	I0819 11:36:31.530937  126337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:31.530961  126337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:31.547434  126337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37023
	I0819 11:36:31.547975  126337 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:31.548563  126337 main.go:141] libmachine: Using API Version  1
	I0819 11:36:31.548590  126337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:31.548926  126337 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:31.549217  126337 main.go:141] libmachine: (ha-503856-m04) Calling .GetState
	I0819 11:36:31.550985  126337 status.go:330] ha-503856-m04 host status = "Running" (err=<nil>)
	I0819 11:36:31.551007  126337 host.go:66] Checking if "ha-503856-m04" exists ...
	I0819 11:36:31.551283  126337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:31.551320  126337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:31.567585  126337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36543
	I0819 11:36:31.568119  126337 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:31.568604  126337 main.go:141] libmachine: Using API Version  1
	I0819 11:36:31.568626  126337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:31.568917  126337 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:31.569125  126337 main.go:141] libmachine: (ha-503856-m04) Calling .GetIP
	I0819 11:36:31.572069  126337 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:31.572493  126337 main.go:141] libmachine: (ha-503856-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d3:72", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:32:57 +0000 UTC Type:0 Mac:52:54:00:56:d3:72 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-503856-m04 Clientid:01:52:54:00:56:d3:72}
	I0819 11:36:31.572514  126337 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:31.572714  126337 host.go:66] Checking if "ha-503856-m04" exists ...
	I0819 11:36:31.573002  126337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:31.573056  126337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:31.589112  126337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36421
	I0819 11:36:31.589598  126337 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:31.590042  126337 main.go:141] libmachine: Using API Version  1
	I0819 11:36:31.590059  126337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:31.590337  126337 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:31.590576  126337 main.go:141] libmachine: (ha-503856-m04) Calling .DriverName
	I0819 11:36:31.590740  126337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:31.590756  126337 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHHostname
	I0819 11:36:31.593582  126337 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:31.594037  126337 main.go:141] libmachine: (ha-503856-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d3:72", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:32:57 +0000 UTC Type:0 Mac:52:54:00:56:d3:72 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-503856-m04 Clientid:01:52:54:00:56:d3:72}
	I0819 11:36:31.594078  126337 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:31.594244  126337 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHPort
	I0819 11:36:31.594483  126337 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHKeyPath
	I0819 11:36:31.594702  126337 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHUsername
	I0819 11:36:31.594879  126337 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m04/id_rsa Username:docker}
	I0819 11:36:31.675218  126337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:36:31.691322  126337 status.go:257] ha-503856-m04 status: &{Name:ha-503856-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr: exit status 3 (4.307385768s)

                                                
                                                
-- stdout --
	ha-503856
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-503856-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-503856-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-503856-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:36:33.759321  126437 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:36:33.759516  126437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:33.759544  126437 out.go:358] Setting ErrFile to fd 2...
	I0819 11:36:33.759561  126437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:33.760043  126437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 11:36:33.760310  126437 out.go:352] Setting JSON to false
	I0819 11:36:33.760346  126437 mustload.go:65] Loading cluster: ha-503856
	I0819 11:36:33.760445  126437 notify.go:220] Checking for updates...
	I0819 11:36:33.760759  126437 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:36:33.760776  126437 status.go:255] checking status of ha-503856 ...
	I0819 11:36:33.761218  126437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:33.761262  126437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:33.777165  126437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34785
	I0819 11:36:33.777676  126437 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:33.778348  126437 main.go:141] libmachine: Using API Version  1
	I0819 11:36:33.778390  126437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:33.778748  126437 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:33.778946  126437 main.go:141] libmachine: (ha-503856) Calling .GetState
	I0819 11:36:33.780704  126437 status.go:330] ha-503856 host status = "Running" (err=<nil>)
	I0819 11:36:33.780728  126437 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:36:33.781043  126437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:33.781079  126437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:33.796734  126437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35211
	I0819 11:36:33.797284  126437 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:33.797890  126437 main.go:141] libmachine: Using API Version  1
	I0819 11:36:33.797928  126437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:33.798268  126437 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:33.798473  126437 main.go:141] libmachine: (ha-503856) Calling .GetIP
	I0819 11:36:33.801650  126437 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:33.802098  126437 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:36:33.802132  126437 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:33.802340  126437 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:36:33.802726  126437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:33.802807  126437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:33.819295  126437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44013
	I0819 11:36:33.820189  126437 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:33.820845  126437 main.go:141] libmachine: Using API Version  1
	I0819 11:36:33.820872  126437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:33.821246  126437 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:33.821486  126437 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:36:33.821712  126437 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:33.821754  126437 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:36:33.824625  126437 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:33.824988  126437 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:36:33.825010  126437 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:33.825185  126437 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:36:33.825366  126437 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:36:33.825564  126437 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:36:33.825720  126437 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:36:33.911181  126437 ssh_runner.go:195] Run: systemctl --version
	I0819 11:36:33.919484  126437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:36:33.934579  126437 kubeconfig.go:125] found "ha-503856" server: "https://192.168.39.254:8443"
	I0819 11:36:33.934616  126437 api_server.go:166] Checking apiserver status ...
	I0819 11:36:33.934662  126437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:36:33.948322  126437 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup
	W0819 11:36:33.957913  126437 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:36:33.957965  126437 ssh_runner.go:195] Run: ls
	I0819 11:36:33.962021  126437 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 11:36:33.966295  126437 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 11:36:33.966335  126437 status.go:422] ha-503856 apiserver status = Running (err=<nil>)
	I0819 11:36:33.966350  126437 status.go:257] ha-503856 status: &{Name:ha-503856 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:36:33.966377  126437 status.go:255] checking status of ha-503856-m02 ...
	I0819 11:36:33.966686  126437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:33.966714  126437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:33.982874  126437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39699
	I0819 11:36:33.983443  126437 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:33.984037  126437 main.go:141] libmachine: Using API Version  1
	I0819 11:36:33.984066  126437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:33.984389  126437 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:33.984584  126437 main.go:141] libmachine: (ha-503856-m02) Calling .GetState
	I0819 11:36:33.986279  126437 status.go:330] ha-503856-m02 host status = "Running" (err=<nil>)
	I0819 11:36:33.986304  126437 host.go:66] Checking if "ha-503856-m02" exists ...
	I0819 11:36:33.986687  126437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:33.986736  126437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:34.001964  126437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38123
	I0819 11:36:34.002416  126437 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:34.002955  126437 main.go:141] libmachine: Using API Version  1
	I0819 11:36:34.002986  126437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:34.003416  126437 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:34.003638  126437 main.go:141] libmachine: (ha-503856-m02) Calling .GetIP
	I0819 11:36:34.006776  126437 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:36:34.007387  126437 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:36:34.007416  126437 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:36:34.007604  126437 host.go:66] Checking if "ha-503856-m02" exists ...
	I0819 11:36:34.007937  126437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:34.007977  126437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:34.023510  126437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40889
	I0819 11:36:34.023991  126437 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:34.024466  126437 main.go:141] libmachine: Using API Version  1
	I0819 11:36:34.024488  126437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:34.024861  126437 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:34.025038  126437 main.go:141] libmachine: (ha-503856-m02) Calling .DriverName
	I0819 11:36:34.025249  126437 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:34.025269  126437 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:36:34.028339  126437 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:36:34.028744  126437 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:36:34.028773  126437 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:36:34.028913  126437 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:36:34.029091  126437 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:36:34.029264  126437 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:36:34.029420  126437 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/id_rsa Username:docker}
	W0819 11:36:34.412098  126437 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.183:22: connect: no route to host
	I0819 11:36:34.412176  126437 retry.go:31] will retry after 206.570242ms: dial tcp 192.168.39.183:22: connect: no route to host
	W0819 11:36:37.676036  126437 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.183:22: connect: no route to host
	W0819 11:36:37.676146  126437 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	E0819 11:36:37.676166  126437 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	I0819 11:36:37.676173  126437 status.go:257] ha-503856-m02 status: &{Name:ha-503856-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 11:36:37.676196  126437 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	I0819 11:36:37.676207  126437 status.go:255] checking status of ha-503856-m03 ...
	I0819 11:36:37.676535  126437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:37.676572  126437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:37.692128  126437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45121
	I0819 11:36:37.692632  126437 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:37.693230  126437 main.go:141] libmachine: Using API Version  1
	I0819 11:36:37.693257  126437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:37.693668  126437 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:37.693932  126437 main.go:141] libmachine: (ha-503856-m03) Calling .GetState
	I0819 11:36:37.695782  126437 status.go:330] ha-503856-m03 host status = "Running" (err=<nil>)
	I0819 11:36:37.695801  126437 host.go:66] Checking if "ha-503856-m03" exists ...
	I0819 11:36:37.696109  126437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:37.696157  126437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:37.711348  126437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39319
	I0819 11:36:37.711807  126437 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:37.712276  126437 main.go:141] libmachine: Using API Version  1
	I0819 11:36:37.712298  126437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:37.712652  126437 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:37.712867  126437 main.go:141] libmachine: (ha-503856-m03) Calling .GetIP
	I0819 11:36:37.715696  126437 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:37.716127  126437 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:36:37.716160  126437 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:37.716324  126437 host.go:66] Checking if "ha-503856-m03" exists ...
	I0819 11:36:37.716668  126437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:37.716709  126437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:37.732385  126437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36037
	I0819 11:36:37.732805  126437 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:37.733290  126437 main.go:141] libmachine: Using API Version  1
	I0819 11:36:37.733314  126437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:37.733602  126437 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:37.733781  126437 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:36:37.734005  126437 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:37.734026  126437 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:36:37.737063  126437 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:37.737649  126437 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:36:37.737684  126437 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:37.737882  126437 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:36:37.738094  126437 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:36:37.738287  126437 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:36:37.738488  126437 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/id_rsa Username:docker}
	I0819 11:36:37.815338  126437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:36:37.829896  126437 kubeconfig.go:125] found "ha-503856" server: "https://192.168.39.254:8443"
	I0819 11:36:37.829925  126437 api_server.go:166] Checking apiserver status ...
	I0819 11:36:37.829958  126437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:36:37.846739  126437 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1469/cgroup
	W0819 11:36:37.857412  126437 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1469/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:36:37.857468  126437 ssh_runner.go:195] Run: ls
	I0819 11:36:37.861598  126437 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 11:36:37.866247  126437 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 11:36:37.866276  126437 status.go:422] ha-503856-m03 apiserver status = Running (err=<nil>)
	I0819 11:36:37.866288  126437 status.go:257] ha-503856-m03 status: &{Name:ha-503856-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:36:37.866310  126437 status.go:255] checking status of ha-503856-m04 ...
	I0819 11:36:37.866709  126437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:37.866744  126437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:37.881923  126437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46669
	I0819 11:36:37.882393  126437 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:37.882873  126437 main.go:141] libmachine: Using API Version  1
	I0819 11:36:37.882899  126437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:37.883181  126437 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:37.883332  126437 main.go:141] libmachine: (ha-503856-m04) Calling .GetState
	I0819 11:36:37.884908  126437 status.go:330] ha-503856-m04 host status = "Running" (err=<nil>)
	I0819 11:36:37.884924  126437 host.go:66] Checking if "ha-503856-m04" exists ...
	I0819 11:36:37.885204  126437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:37.885238  126437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:37.900593  126437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42245
	I0819 11:36:37.901020  126437 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:37.901487  126437 main.go:141] libmachine: Using API Version  1
	I0819 11:36:37.901507  126437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:37.901830  126437 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:37.902041  126437 main.go:141] libmachine: (ha-503856-m04) Calling .GetIP
	I0819 11:36:37.904828  126437 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:37.905179  126437 main.go:141] libmachine: (ha-503856-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d3:72", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:32:57 +0000 UTC Type:0 Mac:52:54:00:56:d3:72 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-503856-m04 Clientid:01:52:54:00:56:d3:72}
	I0819 11:36:37.905204  126437 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:37.905360  126437 host.go:66] Checking if "ha-503856-m04" exists ...
	I0819 11:36:37.905673  126437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:37.905713  126437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:37.922513  126437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42483
	I0819 11:36:37.922936  126437 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:37.923385  126437 main.go:141] libmachine: Using API Version  1
	I0819 11:36:37.923405  126437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:37.923743  126437 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:37.923952  126437 main.go:141] libmachine: (ha-503856-m04) Calling .DriverName
	I0819 11:36:37.924145  126437 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:37.924162  126437 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHHostname
	I0819 11:36:37.926897  126437 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:37.927338  126437 main.go:141] libmachine: (ha-503856-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d3:72", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:32:57 +0000 UTC Type:0 Mac:52:54:00:56:d3:72 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-503856-m04 Clientid:01:52:54:00:56:d3:72}
	I0819 11:36:37.927368  126437 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:37.927562  126437 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHPort
	I0819 11:36:37.927797  126437 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHKeyPath
	I0819 11:36:37.927969  126437 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHUsername
	I0819 11:36:37.928108  126437 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m04/id_rsa Username:docker}
	I0819 11:36:38.007091  126437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:36:38.021653  126437 status.go:257] ha-503856-m04 status: &{Name:ha-503856-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
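The stderr block above shows the apiserver check each control-plane node gets: the status command logs "Checking apiserver healthz at https://192.168.39.254:8443/healthz ..." against the shared VIP endpoint, not the node's own IP, and reports `apiserver: Running` when it gets back `200: ok`. Below is a minimal, hedged sketch of that probe in Go; it is not minikube's actual api_server.go code, and the insecure TLS setting is only there because this sketch does not load the cluster's CA.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues the same kind of GET the status logs show
// ("Checking apiserver healthz at https://192.168.39.254:8443/healthz ...").
// TLS verification is skipped here purely for the sketch; real tooling
// would trust the cluster's own CA instead.
func checkHealthz(url string) (string, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%d: %s", resp.StatusCode, body), nil
}

func main() {
	out, err := checkHealthz("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	fmt.Println(out) // the log expects "200" with body "ok" when the VIP is serving
}
```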
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr: exit status 3 (4.784407849s)

                                                
                                                
-- stdout --
	ha-503856
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-503856-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-503856-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-503856-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:36:39.778107  126538 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:36:39.778363  126538 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:39.778374  126538 out.go:358] Setting ErrFile to fd 2...
	I0819 11:36:39.778378  126538 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:39.778589  126538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 11:36:39.778809  126538 out.go:352] Setting JSON to false
	I0819 11:36:39.778839  126538 mustload.go:65] Loading cluster: ha-503856
	I0819 11:36:39.778946  126538 notify.go:220] Checking for updates...
	I0819 11:36:39.780055  126538 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:36:39.780132  126538 status.go:255] checking status of ha-503856 ...
	I0819 11:36:39.780995  126538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:39.781034  126538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:39.796343  126538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45395
	I0819 11:36:39.796848  126538 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:39.797341  126538 main.go:141] libmachine: Using API Version  1
	I0819 11:36:39.797364  126538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:39.797766  126538 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:39.798046  126538 main.go:141] libmachine: (ha-503856) Calling .GetState
	I0819 11:36:39.799661  126538 status.go:330] ha-503856 host status = "Running" (err=<nil>)
	I0819 11:36:39.799685  126538 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:36:39.800005  126538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:39.800055  126538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:39.816228  126538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44263
	I0819 11:36:39.816756  126538 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:39.817327  126538 main.go:141] libmachine: Using API Version  1
	I0819 11:36:39.817350  126538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:39.817680  126538 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:39.817865  126538 main.go:141] libmachine: (ha-503856) Calling .GetIP
	I0819 11:36:39.820578  126538 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:39.820979  126538 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:36:39.821013  126538 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:39.821133  126538 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:36:39.821467  126538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:39.821514  126538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:39.836591  126538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43539
	I0819 11:36:39.837056  126538 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:39.837571  126538 main.go:141] libmachine: Using API Version  1
	I0819 11:36:39.837590  126538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:39.837929  126538 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:39.838104  126538 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:36:39.838265  126538 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:39.838288  126538 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:36:39.841300  126538 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:39.841683  126538 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:36:39.841716  126538 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:39.841925  126538 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:36:39.842140  126538 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:36:39.842283  126538 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:36:39.842428  126538 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:36:39.923416  126538 ssh_runner.go:195] Run: systemctl --version
	I0819 11:36:39.929590  126538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:36:39.944270  126538 kubeconfig.go:125] found "ha-503856" server: "https://192.168.39.254:8443"
	I0819 11:36:39.944308  126538 api_server.go:166] Checking apiserver status ...
	I0819 11:36:39.944348  126538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:36:39.958725  126538 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup
	W0819 11:36:39.969022  126538 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:36:39.969083  126538 ssh_runner.go:195] Run: ls
	I0819 11:36:39.973580  126538 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 11:36:39.977796  126538 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 11:36:39.977825  126538 status.go:422] ha-503856 apiserver status = Running (err=<nil>)
	I0819 11:36:39.977837  126538 status.go:257] ha-503856 status: &{Name:ha-503856 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:36:39.977857  126538 status.go:255] checking status of ha-503856-m02 ...
	I0819 11:36:39.978160  126538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:39.978190  126538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:39.993613  126538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40435
	I0819 11:36:39.994193  126538 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:39.994708  126538 main.go:141] libmachine: Using API Version  1
	I0819 11:36:39.994731  126538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:39.995064  126538 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:39.995292  126538 main.go:141] libmachine: (ha-503856-m02) Calling .GetState
	I0819 11:36:39.997034  126538 status.go:330] ha-503856-m02 host status = "Running" (err=<nil>)
	I0819 11:36:39.997051  126538 host.go:66] Checking if "ha-503856-m02" exists ...
	I0819 11:36:39.997335  126538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:39.997358  126538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:40.012981  126538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42719
	I0819 11:36:40.013398  126538 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:40.013893  126538 main.go:141] libmachine: Using API Version  1
	I0819 11:36:40.013916  126538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:40.014301  126538 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:40.014506  126538 main.go:141] libmachine: (ha-503856-m02) Calling .GetIP
	I0819 11:36:40.017478  126538 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:36:40.017957  126538 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:36:40.017990  126538 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:36:40.018119  126538 host.go:66] Checking if "ha-503856-m02" exists ...
	I0819 11:36:40.018506  126538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:40.018537  126538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:40.034167  126538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36121
	I0819 11:36:40.034622  126538 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:40.035115  126538 main.go:141] libmachine: Using API Version  1
	I0819 11:36:40.035142  126538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:40.035516  126538 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:40.035758  126538 main.go:141] libmachine: (ha-503856-m02) Calling .DriverName
	I0819 11:36:40.035954  126538 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:40.035977  126538 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:36:40.038916  126538 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:36:40.039345  126538 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:36:40.039375  126538 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:36:40.039493  126538 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:36:40.039666  126538 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:36:40.039856  126538 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:36:40.040019  126538 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/id_rsa Username:docker}
	W0819 11:36:40.751936  126538 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.183:22: connect: no route to host
	I0819 11:36:40.751986  126538 retry.go:31] will retry after 351.96441ms: dial tcp 192.168.39.183:22: connect: no route to host
	W0819 11:36:44.172039  126538 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.183:22: connect: no route to host
	W0819 11:36:44.172126  126538 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	E0819 11:36:44.172146  126538 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	I0819 11:36:44.172167  126538 status.go:257] ha-503856-m02 status: &{Name:ha-503856-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 11:36:44.172185  126538 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	I0819 11:36:44.172192  126538 status.go:255] checking status of ha-503856-m03 ...
	I0819 11:36:44.172487  126538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:44.172528  126538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:44.188440  126538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42043
	I0819 11:36:44.188914  126538 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:44.189481  126538 main.go:141] libmachine: Using API Version  1
	I0819 11:36:44.189507  126538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:44.189835  126538 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:44.190064  126538 main.go:141] libmachine: (ha-503856-m03) Calling .GetState
	I0819 11:36:44.191827  126538 status.go:330] ha-503856-m03 host status = "Running" (err=<nil>)
	I0819 11:36:44.191844  126538 host.go:66] Checking if "ha-503856-m03" exists ...
	I0819 11:36:44.192128  126538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:44.192167  126538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:44.207312  126538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36863
	I0819 11:36:44.207763  126538 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:44.208305  126538 main.go:141] libmachine: Using API Version  1
	I0819 11:36:44.208335  126538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:44.208671  126538 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:44.208897  126538 main.go:141] libmachine: (ha-503856-m03) Calling .GetIP
	I0819 11:36:44.212313  126538 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:44.212726  126538 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:36:44.212754  126538 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:44.212965  126538 host.go:66] Checking if "ha-503856-m03" exists ...
	I0819 11:36:44.213383  126538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:44.213437  126538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:44.228951  126538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40101
	I0819 11:36:44.229461  126538 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:44.229965  126538 main.go:141] libmachine: Using API Version  1
	I0819 11:36:44.229985  126538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:44.230352  126538 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:44.230533  126538 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:36:44.230720  126538 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:44.230741  126538 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:36:44.233701  126538 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:44.234156  126538 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:36:44.234191  126538 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:44.234348  126538 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:36:44.234561  126538 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:36:44.234727  126538 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:36:44.234878  126538 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/id_rsa Username:docker}
	I0819 11:36:44.310965  126538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:36:44.326834  126538 kubeconfig.go:125] found "ha-503856" server: "https://192.168.39.254:8443"
	I0819 11:36:44.326865  126538 api_server.go:166] Checking apiserver status ...
	I0819 11:36:44.326900  126538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:36:44.341501  126538 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1469/cgroup
	W0819 11:36:44.351195  126538 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1469/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:36:44.351262  126538 ssh_runner.go:195] Run: ls
	I0819 11:36:44.355531  126538 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 11:36:44.359865  126538 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 11:36:44.359895  126538 status.go:422] ha-503856-m03 apiserver status = Running (err=<nil>)
	I0819 11:36:44.359906  126538 status.go:257] ha-503856-m03 status: &{Name:ha-503856-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:36:44.359928  126538 status.go:255] checking status of ha-503856-m04 ...
	I0819 11:36:44.360256  126538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:44.360286  126538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:44.375304  126538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46239
	I0819 11:36:44.375809  126538 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:44.376267  126538 main.go:141] libmachine: Using API Version  1
	I0819 11:36:44.376292  126538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:44.376590  126538 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:44.376772  126538 main.go:141] libmachine: (ha-503856-m04) Calling .GetState
	I0819 11:36:44.378300  126538 status.go:330] ha-503856-m04 host status = "Running" (err=<nil>)
	I0819 11:36:44.378315  126538 host.go:66] Checking if "ha-503856-m04" exists ...
	I0819 11:36:44.378687  126538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:44.378717  126538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:44.395556  126538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34325
	I0819 11:36:44.396068  126538 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:44.396565  126538 main.go:141] libmachine: Using API Version  1
	I0819 11:36:44.396591  126538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:44.396925  126538 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:44.397119  126538 main.go:141] libmachine: (ha-503856-m04) Calling .GetIP
	I0819 11:36:44.400094  126538 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:44.400638  126538 main.go:141] libmachine: (ha-503856-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d3:72", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:32:57 +0000 UTC Type:0 Mac:52:54:00:56:d3:72 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-503856-m04 Clientid:01:52:54:00:56:d3:72}
	I0819 11:36:44.400667  126538 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:44.400805  126538 host.go:66] Checking if "ha-503856-m04" exists ...
	I0819 11:36:44.401096  126538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:44.401134  126538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:44.416970  126538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36309
	I0819 11:36:44.417416  126538 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:44.417927  126538 main.go:141] libmachine: Using API Version  1
	I0819 11:36:44.417951  126538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:44.418249  126538 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:44.418426  126538 main.go:141] libmachine: (ha-503856-m04) Calling .DriverName
	I0819 11:36:44.418608  126538 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:44.418626  126538 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHHostname
	I0819 11:36:44.421683  126538 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:44.422190  126538 main.go:141] libmachine: (ha-503856-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d3:72", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:32:57 +0000 UTC Type:0 Mac:52:54:00:56:d3:72 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-503856-m04 Clientid:01:52:54:00:56:d3:72}
	I0819 11:36:44.422220  126538 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:44.422385  126538 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHPort
	I0819 11:36:44.422580  126538 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHKeyPath
	I0819 11:36:44.422715  126538 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHUsername
	I0819 11:36:44.422883  126538 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m04/id_rsa Username:docker}
	I0819 11:36:44.502774  126538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:36:44.517035  126538 status.go:257] ha-503856-m04 status: &{Name:ha-503856-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
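The failure in the run above is isolated to one step: before anything else, status opens an SSH session to each node (port 22) to run `df -h /var`, and for ha-503856-m02 the dial fails with "connect: no route to host", which is why that node is reported as `host: Error` with kubelet and apiserver `Nonexistent` while the other nodes pass. A minimal sketch of that reachability probe, using only the Go standard library and assuming nothing about minikube's real sshutil package (which retries and then opens a full SSH session rather than a bare TCP dial):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// probeSSH mimics the first step behind the "Host:Error" result above:
// dial the node's SSH port with a timeout and report whether a TCP
// connection could be established at all. Simplified sketch only.
func probeSSH(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return fmt.Errorf("dial %s: %w", addr, err) // e.g. "connect: no route to host"
	}
	return conn.Close()
}

func main() {
	// 192.168.39.183 is ha-503856-m02's address from the DHCP lease in the log.
	if err := probeSSH("192.168.39.183:22", 5*time.Second); err != nil {
		fmt.Println("m02 unreachable:", err)
		return
	}
	fmt.Println("m02 SSH port reachable")
}
```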
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr: exit status 3 (3.755523367s)

                                                
                                                
-- stdout --
	ha-503856
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-503856-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-503856-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-503856-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:36:47.765085  126655 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:36:47.765327  126655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:47.765336  126655 out.go:358] Setting ErrFile to fd 2...
	I0819 11:36:47.765340  126655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:47.765543  126655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 11:36:47.765704  126655 out.go:352] Setting JSON to false
	I0819 11:36:47.765730  126655 mustload.go:65] Loading cluster: ha-503856
	I0819 11:36:47.765841  126655 notify.go:220] Checking for updates...
	I0819 11:36:47.766116  126655 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:36:47.766132  126655 status.go:255] checking status of ha-503856 ...
	I0819 11:36:47.766583  126655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:47.766655  126655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:47.788139  126655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34869
	I0819 11:36:47.788640  126655 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:47.789219  126655 main.go:141] libmachine: Using API Version  1
	I0819 11:36:47.789249  126655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:47.789636  126655 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:47.789835  126655 main.go:141] libmachine: (ha-503856) Calling .GetState
	I0819 11:36:47.791556  126655 status.go:330] ha-503856 host status = "Running" (err=<nil>)
	I0819 11:36:47.791576  126655 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:36:47.791930  126655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:47.791977  126655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:47.807366  126655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45547
	I0819 11:36:47.807799  126655 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:47.808291  126655 main.go:141] libmachine: Using API Version  1
	I0819 11:36:47.808319  126655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:47.808623  126655 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:47.808832  126655 main.go:141] libmachine: (ha-503856) Calling .GetIP
	I0819 11:36:47.811584  126655 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:47.811997  126655 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:36:47.812066  126655 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:47.812163  126655 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:36:47.812485  126655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:47.812566  126655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:47.829749  126655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45479
	I0819 11:36:47.830235  126655 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:47.830799  126655 main.go:141] libmachine: Using API Version  1
	I0819 11:36:47.830826  126655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:47.831244  126655 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:47.831467  126655 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:36:47.831714  126655 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:47.831784  126655 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:36:47.834714  126655 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:47.835162  126655 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:36:47.835179  126655 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:47.835360  126655 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:36:47.835560  126655 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:36:47.835766  126655 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:36:47.835931  126655 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:36:47.923399  126655 ssh_runner.go:195] Run: systemctl --version
	I0819 11:36:47.929914  126655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:36:47.946868  126655 kubeconfig.go:125] found "ha-503856" server: "https://192.168.39.254:8443"
	I0819 11:36:47.946908  126655 api_server.go:166] Checking apiserver status ...
	I0819 11:36:47.946944  126655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:36:47.960902  126655 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup
	W0819 11:36:47.971501  126655 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:36:47.971576  126655 ssh_runner.go:195] Run: ls
	I0819 11:36:47.975984  126655 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 11:36:47.980206  126655 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 11:36:47.980237  126655 status.go:422] ha-503856 apiserver status = Running (err=<nil>)
	I0819 11:36:47.980250  126655 status.go:257] ha-503856 status: &{Name:ha-503856 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:36:47.980272  126655 status.go:255] checking status of ha-503856-m02 ...
	I0819 11:36:47.980693  126655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:47.980740  126655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:47.996482  126655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39439
	I0819 11:36:47.997006  126655 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:47.997516  126655 main.go:141] libmachine: Using API Version  1
	I0819 11:36:47.997539  126655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:47.997904  126655 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:47.998111  126655 main.go:141] libmachine: (ha-503856-m02) Calling .GetState
	I0819 11:36:47.999960  126655 status.go:330] ha-503856-m02 host status = "Running" (err=<nil>)
	I0819 11:36:47.999978  126655 host.go:66] Checking if "ha-503856-m02" exists ...
	I0819 11:36:48.000255  126655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:48.000290  126655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:48.016659  126655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33283
	I0819 11:36:48.017101  126655 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:48.017600  126655 main.go:141] libmachine: Using API Version  1
	I0819 11:36:48.017622  126655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:48.017957  126655 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:48.018164  126655 main.go:141] libmachine: (ha-503856-m02) Calling .GetIP
	I0819 11:36:48.021273  126655 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:36:48.021800  126655 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:36:48.021822  126655 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:36:48.021985  126655 host.go:66] Checking if "ha-503856-m02" exists ...
	I0819 11:36:48.022285  126655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:48.022322  126655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:48.038079  126655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42881
	I0819 11:36:48.038498  126655 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:48.039011  126655 main.go:141] libmachine: Using API Version  1
	I0819 11:36:48.039039  126655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:48.039353  126655 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:48.039527  126655 main.go:141] libmachine: (ha-503856-m02) Calling .DriverName
	I0819 11:36:48.039766  126655 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:48.039792  126655 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:36:48.042388  126655 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:36:48.042839  126655 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:36:48.042874  126655 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:36:48.043004  126655 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:36:48.043192  126655 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:36:48.043352  126655 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:36:48.043546  126655 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/id_rsa Username:docker}
	W0819 11:36:51.115997  126655 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.183:22: connect: no route to host
	W0819 11:36:51.116096  126655 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	E0819 11:36:51.116120  126655 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	I0819 11:36:51.116133  126655 status.go:257] ha-503856-m02 status: &{Name:ha-503856-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 11:36:51.116158  126655 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	I0819 11:36:51.116191  126655 status.go:255] checking status of ha-503856-m03 ...
	I0819 11:36:51.116611  126655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:51.116669  126655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:51.131964  126655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34321
	I0819 11:36:51.132434  126655 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:51.132909  126655 main.go:141] libmachine: Using API Version  1
	I0819 11:36:51.132925  126655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:51.133276  126655 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:51.133502  126655 main.go:141] libmachine: (ha-503856-m03) Calling .GetState
	I0819 11:36:51.135171  126655 status.go:330] ha-503856-m03 host status = "Running" (err=<nil>)
	I0819 11:36:51.135188  126655 host.go:66] Checking if "ha-503856-m03" exists ...
	I0819 11:36:51.135461  126655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:51.135499  126655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:51.150654  126655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43907
	I0819 11:36:51.151110  126655 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:51.151554  126655 main.go:141] libmachine: Using API Version  1
	I0819 11:36:51.151570  126655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:51.151899  126655 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:51.152080  126655 main.go:141] libmachine: (ha-503856-m03) Calling .GetIP
	I0819 11:36:51.155002  126655 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:51.155406  126655 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:36:51.155435  126655 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:51.155565  126655 host.go:66] Checking if "ha-503856-m03" exists ...
	I0819 11:36:51.156027  126655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:51.156071  126655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:51.172769  126655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41163
	I0819 11:36:51.173234  126655 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:51.173828  126655 main.go:141] libmachine: Using API Version  1
	I0819 11:36:51.173855  126655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:51.174211  126655 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:51.174421  126655 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:36:51.174606  126655 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:51.174630  126655 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:36:51.177357  126655 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:51.177709  126655 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:36:51.177739  126655 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:51.177872  126655 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:36:51.178046  126655 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:36:51.178228  126655 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:36:51.178353  126655 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/id_rsa Username:docker}
	I0819 11:36:51.259389  126655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:36:51.274358  126655 kubeconfig.go:125] found "ha-503856" server: "https://192.168.39.254:8443"
	I0819 11:36:51.274387  126655 api_server.go:166] Checking apiserver status ...
	I0819 11:36:51.274420  126655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:36:51.288526  126655 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1469/cgroup
	W0819 11:36:51.299591  126655 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1469/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:36:51.299663  126655 ssh_runner.go:195] Run: ls
	I0819 11:36:51.304151  126655 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 11:36:51.309376  126655 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 11:36:51.309462  126655 status.go:422] ha-503856-m03 apiserver status = Running (err=<nil>)
	I0819 11:36:51.309486  126655 status.go:257] ha-503856-m03 status: &{Name:ha-503856-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:36:51.309515  126655 status.go:255] checking status of ha-503856-m04 ...
	I0819 11:36:51.309893  126655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:51.309943  126655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:51.325981  126655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I0819 11:36:51.326500  126655 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:51.327018  126655 main.go:141] libmachine: Using API Version  1
	I0819 11:36:51.327040  126655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:51.327379  126655 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:51.327592  126655 main.go:141] libmachine: (ha-503856-m04) Calling .GetState
	I0819 11:36:51.329685  126655 status.go:330] ha-503856-m04 host status = "Running" (err=<nil>)
	I0819 11:36:51.329711  126655 host.go:66] Checking if "ha-503856-m04" exists ...
	I0819 11:36:51.330015  126655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:51.330065  126655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:51.345741  126655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38629
	I0819 11:36:51.346223  126655 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:51.346679  126655 main.go:141] libmachine: Using API Version  1
	I0819 11:36:51.346705  126655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:51.347163  126655 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:51.347365  126655 main.go:141] libmachine: (ha-503856-m04) Calling .GetIP
	I0819 11:36:51.350332  126655 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:51.350871  126655 main.go:141] libmachine: (ha-503856-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d3:72", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:32:57 +0000 UTC Type:0 Mac:52:54:00:56:d3:72 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-503856-m04 Clientid:01:52:54:00:56:d3:72}
	I0819 11:36:51.350902  126655 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:51.351121  126655 host.go:66] Checking if "ha-503856-m04" exists ...
	I0819 11:36:51.351419  126655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:51.351458  126655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:51.366713  126655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41721
	I0819 11:36:51.367151  126655 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:51.367677  126655 main.go:141] libmachine: Using API Version  1
	I0819 11:36:51.367701  126655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:51.368067  126655 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:51.368327  126655 main.go:141] libmachine: (ha-503856-m04) Calling .DriverName
	I0819 11:36:51.368580  126655 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:51.368606  126655 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHHostname
	I0819 11:36:51.371493  126655 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:51.371908  126655 main.go:141] libmachine: (ha-503856-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d3:72", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:32:57 +0000 UTC Type:0 Mac:52:54:00:56:d3:72 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-503856-m04 Clientid:01:52:54:00:56:d3:72}
	I0819 11:36:51.371942  126655 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:51.372105  126655 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHPort
	I0819 11:36:51.372308  126655 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHKeyPath
	I0819 11:36:51.372453  126655 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHUsername
	I0819 11:36:51.372576  126655 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m04/id_rsa Username:docker}
	I0819 11:36:51.455115  126655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:36:51.469452  126655 status.go:257] ha-503856-m04 status: &{Name:ha-503856-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
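Every per-node check in these logs starts with the same shell pipeline, `sh -c "df -h /var | awk 'NR==2{print $5}'"`, which is the "storage capacity of /var" probe named in the error for m02. For reference, a small sketch of that pipeline run locally from Go (in the log it is executed over SSH on each node; this is not minikube's ssh_runner, just an illustration of the command itself):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// diskUsagePercent runs the same pipeline the status logs show and returns
// the "Use%" column for the filesystem backing /var (second line of df output).
func diskUsagePercent() (string, error) {
	out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	pct, err := diskUsagePercent()
	if err != nil {
		fmt.Println("df failed:", err)
		return
	}
	fmt.Println("/var usage:", pct) // e.g. "12%"
}
```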
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr: exit status 7 (634.8169ms)

                                                
                                                
-- stdout --
	ha-503856
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-503856-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-503856-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-503856-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:36:57.716129  126791 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:36:57.716747  126791 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:57.716766  126791 out.go:358] Setting ErrFile to fd 2...
	I0819 11:36:57.716773  126791 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:57.717218  126791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 11:36:57.717753  126791 out.go:352] Setting JSON to false
	I0819 11:36:57.717784  126791 mustload.go:65] Loading cluster: ha-503856
	I0819 11:36:57.717892  126791 notify.go:220] Checking for updates...
	I0819 11:36:57.718198  126791 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:36:57.718213  126791 status.go:255] checking status of ha-503856 ...
	I0819 11:36:57.718564  126791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:57.718606  126791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:57.735382  126791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35683
	I0819 11:36:57.735851  126791 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:57.736448  126791 main.go:141] libmachine: Using API Version  1
	I0819 11:36:57.736475  126791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:57.736941  126791 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:57.737152  126791 main.go:141] libmachine: (ha-503856) Calling .GetState
	I0819 11:36:57.739190  126791 status.go:330] ha-503856 host status = "Running" (err=<nil>)
	I0819 11:36:57.739210  126791 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:36:57.739525  126791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:57.739572  126791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:57.756363  126791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41947
	I0819 11:36:57.756841  126791 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:57.757301  126791 main.go:141] libmachine: Using API Version  1
	I0819 11:36:57.757327  126791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:57.757648  126791 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:57.757915  126791 main.go:141] libmachine: (ha-503856) Calling .GetIP
	I0819 11:36:57.761373  126791 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:57.761887  126791 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:36:57.761922  126791 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:57.762068  126791 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:36:57.762459  126791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:57.762522  126791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:57.778743  126791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37077
	I0819 11:36:57.779218  126791 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:57.779797  126791 main.go:141] libmachine: Using API Version  1
	I0819 11:36:57.779823  126791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:57.780143  126791 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:57.780316  126791 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:36:57.780508  126791 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:57.780530  126791 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:36:57.783467  126791 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:57.783947  126791 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:36:57.783974  126791 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:36:57.784118  126791 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:36:57.784282  126791 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:36:57.784417  126791 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:36:57.784551  126791 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:36:57.871657  126791 ssh_runner.go:195] Run: systemctl --version
	I0819 11:36:57.877676  126791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:36:57.896504  126791 kubeconfig.go:125] found "ha-503856" server: "https://192.168.39.254:8443"
	I0819 11:36:57.896546  126791 api_server.go:166] Checking apiserver status ...
	I0819 11:36:57.896583  126791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:36:57.911100  126791 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup
	W0819 11:36:57.924430  126791 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:36:57.924519  126791 ssh_runner.go:195] Run: ls
	I0819 11:36:57.929100  126791 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 11:36:57.933414  126791 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 11:36:57.933441  126791 status.go:422] ha-503856 apiserver status = Running (err=<nil>)
	I0819 11:36:57.933452  126791 status.go:257] ha-503856 status: &{Name:ha-503856 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:36:57.933475  126791 status.go:255] checking status of ha-503856-m02 ...
	I0819 11:36:57.933872  126791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:57.933903  126791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:57.948867  126791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44631
	I0819 11:36:57.949283  126791 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:57.949741  126791 main.go:141] libmachine: Using API Version  1
	I0819 11:36:57.949763  126791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:57.950088  126791 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:57.950277  126791 main.go:141] libmachine: (ha-503856-m02) Calling .GetState
	I0819 11:36:57.951946  126791 status.go:330] ha-503856-m02 host status = "Stopped" (err=<nil>)
	I0819 11:36:57.951963  126791 status.go:343] host is not running, skipping remaining checks
	I0819 11:36:57.951971  126791 status.go:257] ha-503856-m02 status: &{Name:ha-503856-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:36:57.951996  126791 status.go:255] checking status of ha-503856-m03 ...
	I0819 11:36:57.952280  126791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:57.952315  126791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:57.967447  126791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40043
	I0819 11:36:57.968030  126791 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:57.968615  126791 main.go:141] libmachine: Using API Version  1
	I0819 11:36:57.968641  126791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:57.968984  126791 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:57.969213  126791 main.go:141] libmachine: (ha-503856-m03) Calling .GetState
	I0819 11:36:57.970824  126791 status.go:330] ha-503856-m03 host status = "Running" (err=<nil>)
	I0819 11:36:57.970840  126791 host.go:66] Checking if "ha-503856-m03" exists ...
	I0819 11:36:57.971121  126791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:57.971166  126791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:57.986460  126791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37925
	I0819 11:36:57.986914  126791 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:57.987358  126791 main.go:141] libmachine: Using API Version  1
	I0819 11:36:57.987381  126791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:57.987698  126791 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:57.987875  126791 main.go:141] libmachine: (ha-503856-m03) Calling .GetIP
	I0819 11:36:57.990942  126791 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:57.991348  126791 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:36:57.991381  126791 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:57.991569  126791 host.go:66] Checking if "ha-503856-m03" exists ...
	I0819 11:36:57.992002  126791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:57.992054  126791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:58.009459  126791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36945
	I0819 11:36:58.009873  126791 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:58.010332  126791 main.go:141] libmachine: Using API Version  1
	I0819 11:36:58.010354  126791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:58.010672  126791 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:58.010881  126791 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:36:58.011074  126791 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:58.011098  126791 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:36:58.013869  126791 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:58.014394  126791 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:36:58.014415  126791 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:36:58.014723  126791 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:36:58.014930  126791 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:36:58.015093  126791 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:36:58.015231  126791 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/id_rsa Username:docker}
	I0819 11:36:58.095221  126791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:36:58.113276  126791 kubeconfig.go:125] found "ha-503856" server: "https://192.168.39.254:8443"
	I0819 11:36:58.113306  126791 api_server.go:166] Checking apiserver status ...
	I0819 11:36:58.113343  126791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:36:58.126389  126791 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1469/cgroup
	W0819 11:36:58.136878  126791 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1469/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:36:58.136949  126791 ssh_runner.go:195] Run: ls
	I0819 11:36:58.141094  126791 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 11:36:58.145420  126791 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 11:36:58.145446  126791 status.go:422] ha-503856-m03 apiserver status = Running (err=<nil>)
	I0819 11:36:58.145455  126791 status.go:257] ha-503856-m03 status: &{Name:ha-503856-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:36:58.145471  126791 status.go:255] checking status of ha-503856-m04 ...
	I0819 11:36:58.145773  126791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:58.145797  126791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:58.160858  126791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38755
	I0819 11:36:58.161325  126791 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:58.161837  126791 main.go:141] libmachine: Using API Version  1
	I0819 11:36:58.161861  126791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:58.162226  126791 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:58.162409  126791 main.go:141] libmachine: (ha-503856-m04) Calling .GetState
	I0819 11:36:58.164115  126791 status.go:330] ha-503856-m04 host status = "Running" (err=<nil>)
	I0819 11:36:58.164137  126791 host.go:66] Checking if "ha-503856-m04" exists ...
	I0819 11:36:58.164423  126791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:58.164471  126791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:58.179721  126791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38343
	I0819 11:36:58.180208  126791 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:58.180686  126791 main.go:141] libmachine: Using API Version  1
	I0819 11:36:58.180717  126791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:58.181018  126791 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:58.181252  126791 main.go:141] libmachine: (ha-503856-m04) Calling .GetIP
	I0819 11:36:58.183830  126791 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:58.184264  126791 main.go:141] libmachine: (ha-503856-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d3:72", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:32:57 +0000 UTC Type:0 Mac:52:54:00:56:d3:72 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-503856-m04 Clientid:01:52:54:00:56:d3:72}
	I0819 11:36:58.184286  126791 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:58.184505  126791 host.go:66] Checking if "ha-503856-m04" exists ...
	I0819 11:36:58.184921  126791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:36:58.184953  126791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:36:58.201876  126791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I0819 11:36:58.202440  126791 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:36:58.202941  126791 main.go:141] libmachine: Using API Version  1
	I0819 11:36:58.202966  126791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:36:58.203361  126791 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:36:58.203554  126791 main.go:141] libmachine: (ha-503856-m04) Calling .DriverName
	I0819 11:36:58.203786  126791 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:36:58.203811  126791 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHHostname
	I0819 11:36:58.206673  126791 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:58.207104  126791 main.go:141] libmachine: (ha-503856-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d3:72", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:32:57 +0000 UTC Type:0 Mac:52:54:00:56:d3:72 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-503856-m04 Clientid:01:52:54:00:56:d3:72}
	I0819 11:36:58.207130  126791 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:36:58.207287  126791 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHPort
	I0819 11:36:58.207477  126791 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHKeyPath
	I0819 11:36:58.207631  126791 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHUsername
	I0819 11:36:58.207819  126791 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m04/id_rsa Username:docker}
	I0819 11:36:58.290623  126791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:36:58.305174  126791 status.go:257] ha-503856-m04 status: &{Name:ha-503856-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr: exit status 7 (630.30189ms)

                                                
                                                
-- stdout --
	ha-503856
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-503856-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-503856-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-503856-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:37:13.772227  126912 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:37:13.772536  126912 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:37:13.772558  126912 out.go:358] Setting ErrFile to fd 2...
	I0819 11:37:13.772566  126912 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:37:13.773041  126912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 11:37:13.773316  126912 out.go:352] Setting JSON to false
	I0819 11:37:13.773351  126912 mustload.go:65] Loading cluster: ha-503856
	I0819 11:37:13.773448  126912 notify.go:220] Checking for updates...
	I0819 11:37:13.773774  126912 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:37:13.773791  126912 status.go:255] checking status of ha-503856 ...
	I0819 11:37:13.774255  126912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:37:13.774321  126912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:37:13.790140  126912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43227
	I0819 11:37:13.790584  126912 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:37:13.791358  126912 main.go:141] libmachine: Using API Version  1
	I0819 11:37:13.791390  126912 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:37:13.791864  126912 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:37:13.792066  126912 main.go:141] libmachine: (ha-503856) Calling .GetState
	I0819 11:37:13.793984  126912 status.go:330] ha-503856 host status = "Running" (err=<nil>)
	I0819 11:37:13.794005  126912 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:37:13.794403  126912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:37:13.794469  126912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:37:13.816533  126912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42921
	I0819 11:37:13.817100  126912 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:37:13.817611  126912 main.go:141] libmachine: Using API Version  1
	I0819 11:37:13.817631  126912 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:37:13.818008  126912 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:37:13.818229  126912 main.go:141] libmachine: (ha-503856) Calling .GetIP
	I0819 11:37:13.821318  126912 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:37:13.821776  126912 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:37:13.821806  126912 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:37:13.822016  126912 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:37:13.822402  126912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:37:13.822468  126912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:37:13.837952  126912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45275
	I0819 11:37:13.838452  126912 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:37:13.839007  126912 main.go:141] libmachine: Using API Version  1
	I0819 11:37:13.839027  126912 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:37:13.839364  126912 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:37:13.839594  126912 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:37:13.839809  126912 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:37:13.839830  126912 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:37:13.844030  126912 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:37:13.844528  126912 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:37:13.844558  126912 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:37:13.844730  126912 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:37:13.844966  126912 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:37:13.845132  126912 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:37:13.845293  126912 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:37:13.935044  126912 ssh_runner.go:195] Run: systemctl --version
	I0819 11:37:13.940602  126912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:37:13.955461  126912 kubeconfig.go:125] found "ha-503856" server: "https://192.168.39.254:8443"
	I0819 11:37:13.955495  126912 api_server.go:166] Checking apiserver status ...
	I0819 11:37:13.955531  126912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:37:13.972025  126912 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup
	W0819 11:37:13.982192  126912 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:37:13.982248  126912 ssh_runner.go:195] Run: ls
	I0819 11:37:13.986430  126912 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 11:37:13.990522  126912 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 11:37:13.990553  126912 status.go:422] ha-503856 apiserver status = Running (err=<nil>)
	I0819 11:37:13.990567  126912 status.go:257] ha-503856 status: &{Name:ha-503856 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:37:13.990590  126912 status.go:255] checking status of ha-503856-m02 ...
	I0819 11:37:13.990902  126912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:37:13.990933  126912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:37:14.006861  126912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41823
	I0819 11:37:14.007306  126912 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:37:14.007899  126912 main.go:141] libmachine: Using API Version  1
	I0819 11:37:14.007921  126912 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:37:14.008228  126912 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:37:14.008449  126912 main.go:141] libmachine: (ha-503856-m02) Calling .GetState
	I0819 11:37:14.009940  126912 status.go:330] ha-503856-m02 host status = "Stopped" (err=<nil>)
	I0819 11:37:14.009955  126912 status.go:343] host is not running, skipping remaining checks
	I0819 11:37:14.009960  126912 status.go:257] ha-503856-m02 status: &{Name:ha-503856-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:37:14.009983  126912 status.go:255] checking status of ha-503856-m03 ...
	I0819 11:37:14.010290  126912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:37:14.010314  126912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:37:14.026109  126912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37945
	I0819 11:37:14.026517  126912 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:37:14.026957  126912 main.go:141] libmachine: Using API Version  1
	I0819 11:37:14.026983  126912 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:37:14.027408  126912 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:37:14.027670  126912 main.go:141] libmachine: (ha-503856-m03) Calling .GetState
	I0819 11:37:14.029309  126912 status.go:330] ha-503856-m03 host status = "Running" (err=<nil>)
	I0819 11:37:14.029327  126912 host.go:66] Checking if "ha-503856-m03" exists ...
	I0819 11:37:14.029745  126912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:37:14.029802  126912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:37:14.045779  126912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42703
	I0819 11:37:14.046275  126912 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:37:14.046778  126912 main.go:141] libmachine: Using API Version  1
	I0819 11:37:14.046799  126912 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:37:14.047191  126912 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:37:14.047411  126912 main.go:141] libmachine: (ha-503856-m03) Calling .GetIP
	I0819 11:37:14.050355  126912 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:37:14.050827  126912 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:37:14.050858  126912 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:37:14.051013  126912 host.go:66] Checking if "ha-503856-m03" exists ...
	I0819 11:37:14.051303  126912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:37:14.051342  126912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:37:14.066656  126912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46475
	I0819 11:37:14.067113  126912 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:37:14.067596  126912 main.go:141] libmachine: Using API Version  1
	I0819 11:37:14.067619  126912 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:37:14.067941  126912 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:37:14.068154  126912 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:37:14.068359  126912 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:37:14.068380  126912 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:37:14.070948  126912 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:37:14.071363  126912 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:37:14.071388  126912 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:37:14.071554  126912 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:37:14.071743  126912 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:37:14.071896  126912 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:37:14.072033  126912 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/id_rsa Username:docker}
	I0819 11:37:14.151237  126912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:37:14.166227  126912 kubeconfig.go:125] found "ha-503856" server: "https://192.168.39.254:8443"
	I0819 11:37:14.166260  126912 api_server.go:166] Checking apiserver status ...
	I0819 11:37:14.166294  126912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:37:14.181078  126912 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1469/cgroup
	W0819 11:37:14.190598  126912 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1469/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:37:14.190663  126912 ssh_runner.go:195] Run: ls
	I0819 11:37:14.195074  126912 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 11:37:14.199495  126912 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 11:37:14.199525  126912 status.go:422] ha-503856-m03 apiserver status = Running (err=<nil>)
	I0819 11:37:14.199534  126912 status.go:257] ha-503856-m03 status: &{Name:ha-503856-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:37:14.199552  126912 status.go:255] checking status of ha-503856-m04 ...
	I0819 11:37:14.199993  126912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:37:14.200046  126912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:37:14.215662  126912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42937
	I0819 11:37:14.216232  126912 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:37:14.216853  126912 main.go:141] libmachine: Using API Version  1
	I0819 11:37:14.216879  126912 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:37:14.217220  126912 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:37:14.217440  126912 main.go:141] libmachine: (ha-503856-m04) Calling .GetState
	I0819 11:37:14.219175  126912 status.go:330] ha-503856-m04 host status = "Running" (err=<nil>)
	I0819 11:37:14.219194  126912 host.go:66] Checking if "ha-503856-m04" exists ...
	I0819 11:37:14.219497  126912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:37:14.219528  126912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:37:14.235630  126912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38635
	I0819 11:37:14.236194  126912 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:37:14.236705  126912 main.go:141] libmachine: Using API Version  1
	I0819 11:37:14.236729  126912 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:37:14.237058  126912 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:37:14.237260  126912 main.go:141] libmachine: (ha-503856-m04) Calling .GetIP
	I0819 11:37:14.240436  126912 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:37:14.240853  126912 main.go:141] libmachine: (ha-503856-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d3:72", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:32:57 +0000 UTC Type:0 Mac:52:54:00:56:d3:72 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-503856-m04 Clientid:01:52:54:00:56:d3:72}
	I0819 11:37:14.240881  126912 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:37:14.241109  126912 host.go:66] Checking if "ha-503856-m04" exists ...
	I0819 11:37:14.241413  126912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:37:14.241464  126912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:37:14.258142  126912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32835
	I0819 11:37:14.258579  126912 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:37:14.259131  126912 main.go:141] libmachine: Using API Version  1
	I0819 11:37:14.259160  126912 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:37:14.259476  126912 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:37:14.259677  126912 main.go:141] libmachine: (ha-503856-m04) Calling .DriverName
	I0819 11:37:14.259908  126912 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:37:14.259939  126912 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHHostname
	I0819 11:37:14.262710  126912 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:37:14.263156  126912 main.go:141] libmachine: (ha-503856-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d3:72", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:32:57 +0000 UTC Type:0 Mac:52:54:00:56:d3:72 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-503856-m04 Clientid:01:52:54:00:56:d3:72}
	I0819 11:37:14.263182  126912 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:37:14.263303  126912 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHPort
	I0819 11:37:14.263522  126912 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHKeyPath
	I0819 11:37:14.263701  126912 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHUsername
	I0819 11:37:14.263859  126912 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m04/id_rsa Username:docker}
	I0819 11:37:14.342750  126912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:37:14.357142  126912 status.go:257] ha-503856-m04 status: &{Name:ha-503856-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-503856 -n ha-503856
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-503856 logs -n 25: (1.314177427s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-503856 cp ha-503856-m03:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856:/home/docker/cp-test_ha-503856-m03_ha-503856.txt                       |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n ha-503856 sudo cat                                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /home/docker/cp-test_ha-503856-m03_ha-503856.txt                                 |           |         |         |                     |                     |
	| cp      | ha-503856 cp ha-503856-m03:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m02:/home/docker/cp-test_ha-503856-m03_ha-503856-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n ha-503856-m02 sudo cat                                          | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /home/docker/cp-test_ha-503856-m03_ha-503856-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-503856 cp ha-503856-m03:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04:/home/docker/cp-test_ha-503856-m03_ha-503856-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n ha-503856-m04 sudo cat                                          | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /home/docker/cp-test_ha-503856-m03_ha-503856-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-503856 cp testdata/cp-test.txt                                                | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-503856 cp ha-503856-m04:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4008298079/001/cp-test_ha-503856-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-503856 cp ha-503856-m04:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856:/home/docker/cp-test_ha-503856-m04_ha-503856.txt                       |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n ha-503856 sudo cat                                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /home/docker/cp-test_ha-503856-m04_ha-503856.txt                                 |           |         |         |                     |                     |
	| cp      | ha-503856 cp ha-503856-m04:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m02:/home/docker/cp-test_ha-503856-m04_ha-503856-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n ha-503856-m02 sudo cat                                          | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /home/docker/cp-test_ha-503856-m04_ha-503856-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-503856 cp ha-503856-m04:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m03:/home/docker/cp-test_ha-503856-m04_ha-503856-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n ha-503856-m03 sudo cat                                          | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /home/docker/cp-test_ha-503856-m04_ha-503856-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-503856 node stop m02 -v=7                                                     | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-503856 node start m02 -v=7                                                    | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:36 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 11:29:25
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 11:29:25.023300  121308 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:29:25.023403  121308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:29:25.023407  121308 out.go:358] Setting ErrFile to fd 2...
	I0819 11:29:25.023411  121308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:29:25.023582  121308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 11:29:25.024191  121308 out.go:352] Setting JSON to false
	I0819 11:29:25.025110  121308 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4311,"bootTime":1724062654,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 11:29:25.025180  121308 start.go:139] virtualization: kvm guest
	I0819 11:29:25.027070  121308 out.go:177] * [ha-503856] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 11:29:25.028243  121308 notify.go:220] Checking for updates...
	I0819 11:29:25.028266  121308 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 11:29:25.029648  121308 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:29:25.031060  121308 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 11:29:25.032384  121308 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 11:29:25.033691  121308 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 11:29:25.034902  121308 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:29:25.036183  121308 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:29:25.073335  121308 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 11:29:25.074656  121308 start.go:297] selected driver: kvm2
	I0819 11:29:25.074678  121308 start.go:901] validating driver "kvm2" against <nil>
	I0819 11:29:25.074695  121308 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:29:25.075514  121308 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:29:25.075622  121308 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19476-99410/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 11:29:25.092588  121308 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 11:29:25.092642  121308 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:29:25.092869  121308 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:29:25.092924  121308 cni.go:84] Creating CNI manager for ""
	I0819 11:29:25.092932  121308 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 11:29:25.092940  121308 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 11:29:25.092984  121308 start.go:340] cluster config:
	{Name:ha-503856 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-503856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:29:25.093092  121308 iso.go:125] acquiring lock: {Name:mkf3e460532d8bd3cc1352bbf698537d79a481a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:29:25.094757  121308 out.go:177] * Starting "ha-503856" primary control-plane node in "ha-503856" cluster
	I0819 11:29:25.096077  121308 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 11:29:25.096125  121308 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 11:29:25.096140  121308 cache.go:56] Caching tarball of preloaded images
	I0819 11:29:25.096238  121308 preload.go:172] Found /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 11:29:25.096250  121308 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 11:29:25.096572  121308 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/config.json ...
	I0819 11:29:25.096596  121308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/config.json: {Name:mkb252db29952c96b64f97f7f38d69e55e2baf9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:29:25.096771  121308 start.go:360] acquireMachinesLock for ha-503856: {Name:mk79269fbbd6912302c8df8ddb039d4e2f1a0790 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:29:25.096807  121308 start.go:364] duration metric: took 20.687µs to acquireMachinesLock for "ha-503856"
	I0819 11:29:25.096831  121308 start.go:93] Provisioning new machine with config: &{Name:ha-503856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-503856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 11:29:25.096907  121308 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 11:29:25.098381  121308 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:29:25.098537  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:29:25.098582  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:29:25.116025  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33505
	I0819 11:29:25.116529  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:29:25.117139  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:29:25.117161  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:29:25.117560  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:29:25.117750  121308 main.go:141] libmachine: (ha-503856) Calling .GetMachineName
	I0819 11:29:25.117875  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:29:25.118004  121308 start.go:159] libmachine.API.Create for "ha-503856" (driver="kvm2")
	I0819 11:29:25.118032  121308 client.go:168] LocalClient.Create starting
	I0819 11:29:25.118060  121308 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem
	I0819 11:29:25.118090  121308 main.go:141] libmachine: Decoding PEM data...
	I0819 11:29:25.118104  121308 main.go:141] libmachine: Parsing certificate...
	I0819 11:29:25.118160  121308 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem
	I0819 11:29:25.118180  121308 main.go:141] libmachine: Decoding PEM data...
	I0819 11:29:25.118194  121308 main.go:141] libmachine: Parsing certificate...
	I0819 11:29:25.118209  121308 main.go:141] libmachine: Running pre-create checks...
	I0819 11:29:25.118216  121308 main.go:141] libmachine: (ha-503856) Calling .PreCreateCheck
	I0819 11:29:25.118509  121308 main.go:141] libmachine: (ha-503856) Calling .GetConfigRaw
	I0819 11:29:25.118863  121308 main.go:141] libmachine: Creating machine...
	I0819 11:29:25.118876  121308 main.go:141] libmachine: (ha-503856) Calling .Create
	I0819 11:29:25.119005  121308 main.go:141] libmachine: (ha-503856) Creating KVM machine...
	I0819 11:29:25.120328  121308 main.go:141] libmachine: (ha-503856) DBG | found existing default KVM network
	I0819 11:29:25.121251  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:25.121110  121331 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c00}
	I0819 11:29:25.121321  121308 main.go:141] libmachine: (ha-503856) DBG | created network xml: 
	I0819 11:29:25.121347  121308 main.go:141] libmachine: (ha-503856) DBG | <network>
	I0819 11:29:25.121363  121308 main.go:141] libmachine: (ha-503856) DBG |   <name>mk-ha-503856</name>
	I0819 11:29:25.121379  121308 main.go:141] libmachine: (ha-503856) DBG |   <dns enable='no'/>
	I0819 11:29:25.121395  121308 main.go:141] libmachine: (ha-503856) DBG |   
	I0819 11:29:25.121408  121308 main.go:141] libmachine: (ha-503856) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0819 11:29:25.121418  121308 main.go:141] libmachine: (ha-503856) DBG |     <dhcp>
	I0819 11:29:25.121426  121308 main.go:141] libmachine: (ha-503856) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0819 11:29:25.121434  121308 main.go:141] libmachine: (ha-503856) DBG |     </dhcp>
	I0819 11:29:25.121439  121308 main.go:141] libmachine: (ha-503856) DBG |   </ip>
	I0819 11:29:25.121444  121308 main.go:141] libmachine: (ha-503856) DBG |   
	I0819 11:29:25.121452  121308 main.go:141] libmachine: (ha-503856) DBG | </network>
	I0819 11:29:25.121476  121308 main.go:141] libmachine: (ha-503856) DBG | 
	I0819 11:29:25.126606  121308 main.go:141] libmachine: (ha-503856) DBG | trying to create private KVM network mk-ha-503856 192.168.39.0/24...
	I0819 11:29:25.194625  121308 main.go:141] libmachine: (ha-503856) DBG | private KVM network mk-ha-503856 192.168.39.0/24 created
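The network XML dumped above is handed straight to libvirt; a roughly equivalent manual sequence with the virsh CLI would look like the following (a sketch only: the file name is illustrative, and minikube drives libvirt through its Go bindings rather than shelling out to virsh).

	# assuming the XML above was saved to mk-ha-503856.xml
	virsh net-define mk-ha-503856.xml    # register the isolated network
	virsh net-start mk-ha-503856         # create the bridge and start dnsmasq for 192.168.39.2-253
	virsh net-list --all                 # confirm mk-ha-503856 shows as active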
	I0819 11:29:25.194705  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:25.194577  121331 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 11:29:25.194752  121308 main.go:141] libmachine: (ha-503856) Setting up store path in /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856 ...
	I0819 11:29:25.194774  121308 main.go:141] libmachine: (ha-503856) Building disk image from file:///home/jenkins/minikube-integration/19476-99410/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 11:29:25.194792  121308 main.go:141] libmachine: (ha-503856) Downloading /home/jenkins/minikube-integration/19476-99410/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19476-99410/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 11:29:25.459397  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:25.459252  121331 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa...
	I0819 11:29:25.646269  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:25.646148  121331 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/ha-503856.rawdisk...
	I0819 11:29:25.646294  121308 main.go:141] libmachine: (ha-503856) DBG | Writing magic tar header
	I0819 11:29:25.646305  121308 main.go:141] libmachine: (ha-503856) DBG | Writing SSH key tar header
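The raw disk here is a plain file of the configured size with a small tar archive (carrying the freshly generated SSH key) written at its head, following the boot2docker auto-format convention, so the guest can partition the disk and unpack the key on first boot. Creating a comparable blank disk by hand would look roughly like this (size taken from the cluster config above; the tar-header step is specific to minikube and not reproduced here):

	# a sketch only; minikube writes the magic tar header itself after creating the file
	qemu-img create -f raw ha-503856.rawdisk 20000M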
	I0819 11:29:25.646313  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:25.646267  121331 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856 ...
	I0819 11:29:25.646390  121308 main.go:141] libmachine: (ha-503856) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856
	I0819 11:29:25.646415  121308 main.go:141] libmachine: (ha-503856) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856 (perms=drwx------)
	I0819 11:29:25.646427  121308 main.go:141] libmachine: (ha-503856) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube/machines (perms=drwxr-xr-x)
	I0819 11:29:25.646438  121308 main.go:141] libmachine: (ha-503856) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube/machines
	I0819 11:29:25.646480  121308 main.go:141] libmachine: (ha-503856) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube (perms=drwxr-xr-x)
	I0819 11:29:25.646512  121308 main.go:141] libmachine: (ha-503856) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410 (perms=drwxrwxr-x)
	I0819 11:29:25.646526  121308 main.go:141] libmachine: (ha-503856) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 11:29:25.646538  121308 main.go:141] libmachine: (ha-503856) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 11:29:25.646562  121308 main.go:141] libmachine: (ha-503856) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410
	I0819 11:29:25.646582  121308 main.go:141] libmachine: (ha-503856) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 11:29:25.646589  121308 main.go:141] libmachine: (ha-503856) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 11:29:25.646602  121308 main.go:141] libmachine: (ha-503856) DBG | Checking permissions on dir: /home/jenkins
	I0819 11:29:25.646619  121308 main.go:141] libmachine: (ha-503856) DBG | Checking permissions on dir: /home
	I0819 11:29:25.646627  121308 main.go:141] libmachine: (ha-503856) Creating domain...
	I0819 11:29:25.646642  121308 main.go:141] libmachine: (ha-503856) DBG | Skipping /home - not owner
	I0819 11:29:25.647557  121308 main.go:141] libmachine: (ha-503856) define libvirt domain using xml: 
	I0819 11:29:25.647582  121308 main.go:141] libmachine: (ha-503856) <domain type='kvm'>
	I0819 11:29:25.647593  121308 main.go:141] libmachine: (ha-503856)   <name>ha-503856</name>
	I0819 11:29:25.647601  121308 main.go:141] libmachine: (ha-503856)   <memory unit='MiB'>2200</memory>
	I0819 11:29:25.647609  121308 main.go:141] libmachine: (ha-503856)   <vcpu>2</vcpu>
	I0819 11:29:25.647616  121308 main.go:141] libmachine: (ha-503856)   <features>
	I0819 11:29:25.647624  121308 main.go:141] libmachine: (ha-503856)     <acpi/>
	I0819 11:29:25.647631  121308 main.go:141] libmachine: (ha-503856)     <apic/>
	I0819 11:29:25.647639  121308 main.go:141] libmachine: (ha-503856)     <pae/>
	I0819 11:29:25.647650  121308 main.go:141] libmachine: (ha-503856)     
	I0819 11:29:25.647665  121308 main.go:141] libmachine: (ha-503856)   </features>
	I0819 11:29:25.647681  121308 main.go:141] libmachine: (ha-503856)   <cpu mode='host-passthrough'>
	I0819 11:29:25.647703  121308 main.go:141] libmachine: (ha-503856)   
	I0819 11:29:25.647732  121308 main.go:141] libmachine: (ha-503856)   </cpu>
	I0819 11:29:25.647742  121308 main.go:141] libmachine: (ha-503856)   <os>
	I0819 11:29:25.647753  121308 main.go:141] libmachine: (ha-503856)     <type>hvm</type>
	I0819 11:29:25.647763  121308 main.go:141] libmachine: (ha-503856)     <boot dev='cdrom'/>
	I0819 11:29:25.647775  121308 main.go:141] libmachine: (ha-503856)     <boot dev='hd'/>
	I0819 11:29:25.647798  121308 main.go:141] libmachine: (ha-503856)     <bootmenu enable='no'/>
	I0819 11:29:25.647813  121308 main.go:141] libmachine: (ha-503856)   </os>
	I0819 11:29:25.647827  121308 main.go:141] libmachine: (ha-503856)   <devices>
	I0819 11:29:25.647844  121308 main.go:141] libmachine: (ha-503856)     <disk type='file' device='cdrom'>
	I0819 11:29:25.647863  121308 main.go:141] libmachine: (ha-503856)       <source file='/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/boot2docker.iso'/>
	I0819 11:29:25.647874  121308 main.go:141] libmachine: (ha-503856)       <target dev='hdc' bus='scsi'/>
	I0819 11:29:25.647886  121308 main.go:141] libmachine: (ha-503856)       <readonly/>
	I0819 11:29:25.647896  121308 main.go:141] libmachine: (ha-503856)     </disk>
	I0819 11:29:25.647910  121308 main.go:141] libmachine: (ha-503856)     <disk type='file' device='disk'>
	I0819 11:29:25.647929  121308 main.go:141] libmachine: (ha-503856)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 11:29:25.647945  121308 main.go:141] libmachine: (ha-503856)       <source file='/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/ha-503856.rawdisk'/>
	I0819 11:29:25.647956  121308 main.go:141] libmachine: (ha-503856)       <target dev='hda' bus='virtio'/>
	I0819 11:29:25.647963  121308 main.go:141] libmachine: (ha-503856)     </disk>
	I0819 11:29:25.647974  121308 main.go:141] libmachine: (ha-503856)     <interface type='network'>
	I0819 11:29:25.647986  121308 main.go:141] libmachine: (ha-503856)       <source network='mk-ha-503856'/>
	I0819 11:29:25.648000  121308 main.go:141] libmachine: (ha-503856)       <model type='virtio'/>
	I0819 11:29:25.648012  121308 main.go:141] libmachine: (ha-503856)     </interface>
	I0819 11:29:25.648023  121308 main.go:141] libmachine: (ha-503856)     <interface type='network'>
	I0819 11:29:25.648035  121308 main.go:141] libmachine: (ha-503856)       <source network='default'/>
	I0819 11:29:25.648046  121308 main.go:141] libmachine: (ha-503856)       <model type='virtio'/>
	I0819 11:29:25.648054  121308 main.go:141] libmachine: (ha-503856)     </interface>
	I0819 11:29:25.648065  121308 main.go:141] libmachine: (ha-503856)     <serial type='pty'>
	I0819 11:29:25.648074  121308 main.go:141] libmachine: (ha-503856)       <target port='0'/>
	I0819 11:29:25.648086  121308 main.go:141] libmachine: (ha-503856)     </serial>
	I0819 11:29:25.648096  121308 main.go:141] libmachine: (ha-503856)     <console type='pty'>
	I0819 11:29:25.648105  121308 main.go:141] libmachine: (ha-503856)       <target type='serial' port='0'/>
	I0819 11:29:25.648125  121308 main.go:141] libmachine: (ha-503856)     </console>
	I0819 11:29:25.648136  121308 main.go:141] libmachine: (ha-503856)     <rng model='virtio'>
	I0819 11:29:25.648153  121308 main.go:141] libmachine: (ha-503856)       <backend model='random'>/dev/random</backend>
	I0819 11:29:25.648164  121308 main.go:141] libmachine: (ha-503856)     </rng>
	I0819 11:29:25.648173  121308 main.go:141] libmachine: (ha-503856)     
	I0819 11:29:25.648180  121308 main.go:141] libmachine: (ha-503856)     
	I0819 11:29:25.648189  121308 main.go:141] libmachine: (ha-503856)   </devices>
	I0819 11:29:25.648200  121308 main.go:141] libmachine: (ha-503856) </domain>
	I0819 11:29:25.648210  121308 main.go:141] libmachine: (ha-503856) 
	I0819 11:29:25.652462  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:72:77:58 in network default
	I0819 11:29:25.653112  121308 main.go:141] libmachine: (ha-503856) Ensuring networks are active...
	I0819 11:29:25.653131  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:25.653799  121308 main.go:141] libmachine: (ha-503856) Ensuring network default is active
	I0819 11:29:25.654102  121308 main.go:141] libmachine: (ha-503856) Ensuring network mk-ha-503856 is active
	I0819 11:29:25.654521  121308 main.go:141] libmachine: (ha-503856) Getting domain xml...
	I0819 11:29:25.655166  121308 main.go:141] libmachine: (ha-503856) Creating domain...
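Defining and booting the domain, plus the IP discovery that the retry loop below performs against the DHCP leases, map onto roughly these virsh commands (again a sketch with an illustrative file name):

	virsh define ha-503856.xml                  # register the domain from the XML printed above
	virsh start ha-503856                       # boot the VM from the attached boot2docker ISO
	virsh domifaddr ha-503856 --source lease    # read the guest IP from the network's DHCP lease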
	I0819 11:29:26.869502  121308 main.go:141] libmachine: (ha-503856) Waiting to get IP...
	I0819 11:29:26.870259  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:26.870614  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:26.870637  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:26.870590  121331 retry.go:31] will retry after 296.406567ms: waiting for machine to come up
	I0819 11:29:27.169294  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:27.169757  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:27.169782  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:27.169723  121331 retry.go:31] will retry after 276.081331ms: waiting for machine to come up
	I0819 11:29:27.447191  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:27.447642  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:27.447667  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:27.447604  121331 retry.go:31] will retry after 385.241682ms: waiting for machine to come up
	I0819 11:29:27.834217  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:27.834627  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:27.834658  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:27.834604  121331 retry.go:31] will retry after 586.232406ms: waiting for machine to come up
	I0819 11:29:28.422499  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:28.422826  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:28.422875  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:28.422799  121331 retry.go:31] will retry after 517.887819ms: waiting for machine to come up
	I0819 11:29:28.942704  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:28.943161  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:28.943192  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:28.943117  121331 retry.go:31] will retry after 638.927317ms: waiting for machine to come up
	I0819 11:29:29.584039  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:29.584404  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:29.584448  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:29.584361  121331 retry.go:31] will retry after 1.031172042s: waiting for machine to come up
	I0819 11:29:30.617196  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:30.617579  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:30.617604  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:30.617527  121331 retry.go:31] will retry after 1.482642322s: waiting for machine to come up
	I0819 11:29:32.102169  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:32.102589  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:32.102617  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:32.102540  121331 retry.go:31] will retry after 1.291948881s: waiting for machine to come up
	I0819 11:29:33.396112  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:33.396572  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:33.396603  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:33.396515  121331 retry.go:31] will retry after 1.881043413s: waiting for machine to come up
	I0819 11:29:35.279181  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:35.279630  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:35.279663  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:35.279612  121331 retry.go:31] will retry after 1.897450306s: waiting for machine to come up
	I0819 11:29:37.179767  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:37.180214  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:37.180241  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:37.180195  121331 retry.go:31] will retry after 3.322751014s: waiting for machine to come up
	I0819 11:29:40.504395  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:40.504881  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find current IP address of domain ha-503856 in network mk-ha-503856
	I0819 11:29:40.504900  121308 main.go:141] libmachine: (ha-503856) DBG | I0819 11:29:40.504827  121331 retry.go:31] will retry after 3.885433697s: waiting for machine to come up
	I0819 11:29:44.395167  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:44.395631  121308 main.go:141] libmachine: (ha-503856) Found IP for machine: 192.168.39.102
	I0819 11:29:44.395647  121308 main.go:141] libmachine: (ha-503856) Reserving static IP address...
	I0819 11:29:44.395662  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has current primary IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:44.396011  121308 main.go:141] libmachine: (ha-503856) DBG | unable to find host DHCP lease matching {name: "ha-503856", mac: "52:54:00:d1:ab:80", ip: "192.168.39.102"} in network mk-ha-503856
	I0819 11:29:44.475072  121308 main.go:141] libmachine: (ha-503856) DBG | Getting to WaitForSSH function...
	I0819 11:29:44.475106  121308 main.go:141] libmachine: (ha-503856) Reserved static IP address: 192.168.39.102
	I0819 11:29:44.475121  121308 main.go:141] libmachine: (ha-503856) Waiting for SSH to be available...
	I0819 11:29:44.477916  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:44.478299  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:44.478335  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:44.478478  121308 main.go:141] libmachine: (ha-503856) DBG | Using SSH client type: external
	I0819 11:29:44.478511  121308 main.go:141] libmachine: (ha-503856) DBG | Using SSH private key: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa (-rw-------)
	I0819 11:29:44.478544  121308 main.go:141] libmachine: (ha-503856) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 11:29:44.478559  121308 main.go:141] libmachine: (ha-503856) DBG | About to run SSH command:
	I0819 11:29:44.478572  121308 main.go:141] libmachine: (ha-503856) DBG | exit 0
	I0819 11:29:44.603965  121308 main.go:141] libmachine: (ha-503856) DBG | SSH cmd err, output: <nil>: 
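The argument vector logged a few lines up corresponds, roughly, to an ssh invocation of the following shape (host, key path and options are taken verbatim from the log; the probe command is simply exit 0 to confirm reachability):

	ssh -F /dev/null \
	    -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	    -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	    -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o IdentitiesOnly=yes -p 22 \
	    -i /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa \
	    docker@192.168.39.102 'exit 0'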
	I0819 11:29:44.604267  121308 main.go:141] libmachine: (ha-503856) KVM machine creation complete!
	I0819 11:29:44.604611  121308 main.go:141] libmachine: (ha-503856) Calling .GetConfigRaw
	I0819 11:29:44.605234  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:29:44.605435  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:29:44.605607  121308 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 11:29:44.605622  121308 main.go:141] libmachine: (ha-503856) Calling .GetState
	I0819 11:29:44.606987  121308 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 11:29:44.607004  121308 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 11:29:44.607012  121308 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 11:29:44.607021  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:29:44.609226  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:44.609590  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:44.609627  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:44.609777  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:29:44.610001  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:44.610222  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:44.610353  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:29:44.610511  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:29:44.610722  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0819 11:29:44.610736  121308 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 11:29:44.715112  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 11:29:44.715132  121308 main.go:141] libmachine: Detecting the provisioner...
	I0819 11:29:44.715140  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:29:44.717839  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:44.718198  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:44.718226  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:44.718384  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:29:44.718595  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:44.718748  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:44.718874  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:29:44.719026  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:29:44.719188  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0819 11:29:44.719199  121308 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 11:29:44.824171  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 11:29:44.824277  121308 main.go:141] libmachine: found compatible host: buildroot
	I0819 11:29:44.824292  121308 main.go:141] libmachine: Provisioning with buildroot...
	I0819 11:29:44.824304  121308 main.go:141] libmachine: (ha-503856) Calling .GetMachineName
	I0819 11:29:44.824628  121308 buildroot.go:166] provisioning hostname "ha-503856"
	I0819 11:29:44.824654  121308 main.go:141] libmachine: (ha-503856) Calling .GetMachineName
	I0819 11:29:44.824823  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:29:44.827275  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:44.827565  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:44.827590  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:44.827716  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:29:44.827928  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:44.828082  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:44.828197  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:29:44.828342  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:29:44.828548  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0819 11:29:44.828561  121308 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-503856 && echo "ha-503856" | sudo tee /etc/hostname
	I0819 11:29:44.945015  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-503856
	
	I0819 11:29:44.945054  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:29:44.947703  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:44.948087  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:44.948121  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:44.948304  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:29:44.948543  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:44.948726  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:44.948881  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:29:44.949022  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:29:44.949178  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0819 11:29:44.949192  121308 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-503856' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-503856/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-503856' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 11:29:45.063956  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 11:29:45.063986  121308 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19476-99410/.minikube CaCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19476-99410/.minikube}
	I0819 11:29:45.064033  121308 buildroot.go:174] setting up certificates
	I0819 11:29:45.064047  121308 provision.go:84] configureAuth start
	I0819 11:29:45.064061  121308 main.go:141] libmachine: (ha-503856) Calling .GetMachineName
	I0819 11:29:45.064388  121308 main.go:141] libmachine: (ha-503856) Calling .GetIP
	I0819 11:29:45.066803  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.067097  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:45.067128  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.067229  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:29:45.069505  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.069809  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:45.069836  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.069965  121308 provision.go:143] copyHostCerts
	I0819 11:29:45.069998  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 11:29:45.070042  121308 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem, removing ...
	I0819 11:29:45.070060  121308 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 11:29:45.070127  121308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem (1082 bytes)
	I0819 11:29:45.070217  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 11:29:45.070238  121308 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem, removing ...
	I0819 11:29:45.070245  121308 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 11:29:45.070268  121308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem (1123 bytes)
	I0819 11:29:45.070336  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 11:29:45.070360  121308 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem, removing ...
	I0819 11:29:45.070368  121308 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 11:29:45.070401  121308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem (1679 bytes)
	I0819 11:29:45.070469  121308 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem org=jenkins.ha-503856 san=[127.0.0.1 192.168.39.102 ha-503856 localhost minikube]
	I0819 11:29:45.164209  121308 provision.go:177] copyRemoteCerts
	I0819 11:29:45.164278  121308 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 11:29:45.164310  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:29:45.166851  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.167327  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:45.167361  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.167489  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:29:45.167715  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:45.167905  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:29:45.168078  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:29:45.249878  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 11:29:45.249969  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0819 11:29:45.274017  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 11:29:45.274087  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 11:29:45.297588  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 11:29:45.297659  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
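With server.pem, server-key.pem and ca.pem now under /etc/docker on the guest, the SANs requested for the server certificate above (127.0.0.1, 192.168.39.102, ha-503856, localhost, minikube) can be spot-checked with a quick openssl query, for example:

	# run on the guest, or against a local copy of the file
	openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'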
	I0819 11:29:45.321478  121308 provision.go:87] duration metric: took 257.404108ms to configureAuth
	I0819 11:29:45.321508  121308 buildroot.go:189] setting minikube options for container-runtime
	I0819 11:29:45.321681  121308 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:29:45.321760  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:29:45.324425  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.324811  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:45.324853  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.325040  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:29:45.325250  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:45.325400  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:45.325526  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:29:45.325666  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:29:45.325846  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0819 11:29:45.325871  121308 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 11:29:45.588104  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 11:29:45.588138  121308 main.go:141] libmachine: Checking connection to Docker...
	I0819 11:29:45.588149  121308 main.go:141] libmachine: (ha-503856) Calling .GetURL
	I0819 11:29:45.589426  121308 main.go:141] libmachine: (ha-503856) DBG | Using libvirt version 6000000
	I0819 11:29:45.591760  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.592252  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:45.592274  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.592501  121308 main.go:141] libmachine: Docker is up and running!
	I0819 11:29:45.592522  121308 main.go:141] libmachine: Reticulating splines...
	I0819 11:29:45.592529  121308 client.go:171] duration metric: took 20.474488342s to LocalClient.Create
	I0819 11:29:45.592552  121308 start.go:167] duration metric: took 20.474549128s to libmachine.API.Create "ha-503856"
	I0819 11:29:45.592563  121308 start.go:293] postStartSetup for "ha-503856" (driver="kvm2")
	I0819 11:29:45.592574  121308 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 11:29:45.592590  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:29:45.592822  121308 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 11:29:45.592847  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:29:45.594970  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.595304  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:45.595330  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.595508  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:29:45.595704  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:45.595878  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:29:45.596035  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:29:45.677728  121308 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 11:29:45.681821  121308 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 11:29:45.681848  121308 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/addons for local assets ...
	I0819 11:29:45.681914  121308 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/files for local assets ...
	I0819 11:29:45.681986  121308 filesync.go:149] local asset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> 1066322.pem in /etc/ssl/certs
	I0819 11:29:45.681996  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> /etc/ssl/certs/1066322.pem
	I0819 11:29:45.682085  121308 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 11:29:45.691177  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 11:29:45.714524  121308 start.go:296] duration metric: took 121.945037ms for postStartSetup
	I0819 11:29:45.714586  121308 main.go:141] libmachine: (ha-503856) Calling .GetConfigRaw
	I0819 11:29:45.715202  121308 main.go:141] libmachine: (ha-503856) Calling .GetIP
	I0819 11:29:45.717648  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.717977  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:45.718016  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.718245  121308 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/config.json ...
	I0819 11:29:45.718453  121308 start.go:128] duration metric: took 20.621534419s to createHost
	I0819 11:29:45.718475  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:29:45.720739  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.721090  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:45.721117  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.721288  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:29:45.721487  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:45.721658  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:45.721812  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:29:45.721962  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:29:45.722164  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0819 11:29:45.722176  121308 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 11:29:45.828348  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724066985.800962210
	
	I0819 11:29:45.828376  121308 fix.go:216] guest clock: 1724066985.800962210
	I0819 11:29:45.828387  121308 fix.go:229] Guest: 2024-08-19 11:29:45.80096221 +0000 UTC Remote: 2024-08-19 11:29:45.718464633 +0000 UTC m=+20.731826657 (delta=82.497577ms)
	I0819 11:29:45.828409  121308 fix.go:200] guest clock delta is within tolerance: 82.497577ms
	I0819 11:29:45.828414  121308 start.go:83] releasing machines lock for "ha-503856", held for 20.731595853s
	I0819 11:29:45.828432  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:29:45.828742  121308 main.go:141] libmachine: (ha-503856) Calling .GetIP
	I0819 11:29:45.831183  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.831496  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:45.831533  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.831648  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:29:45.832259  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:29:45.832455  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:29:45.832554  121308 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 11:29:45.832609  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:29:45.832661  121308 ssh_runner.go:195] Run: cat /version.json
	I0819 11:29:45.832687  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:29:45.835004  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.835076  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.835421  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:45.835454  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:45.835475  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.835490  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:45.835628  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:29:45.835663  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:29:45.835828  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:45.835836  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:29:45.835979  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:29:45.835987  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:29:45.836117  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:29:45.836116  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:29:45.912640  121308 ssh_runner.go:195] Run: systemctl --version
	I0819 11:29:45.933374  121308 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 11:29:46.087190  121308 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 11:29:46.093838  121308 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 11:29:46.093904  121308 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 11:29:46.109026  121308 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 11:29:46.109055  121308 start.go:495] detecting cgroup driver to use...
	I0819 11:29:46.109129  121308 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 11:29:46.124862  121308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 11:29:46.138847  121308 docker.go:217] disabling cri-docker service (if available) ...
	I0819 11:29:46.138912  121308 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 11:29:46.153299  121308 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 11:29:46.167932  121308 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 11:29:46.288292  121308 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 11:29:46.458556  121308 docker.go:233] disabling docker service ...
	I0819 11:29:46.458652  121308 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 11:29:46.473035  121308 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 11:29:46.486416  121308 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 11:29:46.614865  121308 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 11:29:46.748884  121308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 11:29:46.762268  121308 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 11:29:46.780298  121308 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 11:29:46.780378  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:29:46.790974  121308 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 11:29:46.791039  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:29:46.801482  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:29:46.811862  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:29:46.822358  121308 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 11:29:46.832997  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:29:46.843401  121308 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:29:46.860306  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
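	The sed invocations above patch cri-o's drop-in config in place rather than rewriting the whole file: they set the pause image and cgroup manager, pin conmon_cgroup to "pod", and ensure default_sysctls opens unprivileged ports. The resulting drop-in is not printed in this log, so the expected contents below are reconstructed from the sed expressions; the check itself is just a sketch one could run inside the VM:
	    # Sketch only: inspect the drop-in that the sed edits above should have produced
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # expected (reconstructed from the sed expressions, not captured verbatim in this run):
	    #   pause_image = "registry.k8s.io/pause:3.10"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",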
	I0819 11:29:46.870896  121308 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 11:29:46.880199  121308 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 11:29:46.880269  121308 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 11:29:46.893533  121308 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 11:29:46.903338  121308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:29:47.030300  121308 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 11:29:47.163347  121308 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 11:29:47.163438  121308 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 11:29:47.168142  121308 start.go:563] Will wait 60s for crictl version
	I0819 11:29:47.168210  121308 ssh_runner.go:195] Run: which crictl
	I0819 11:29:47.171837  121308 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 11:29:47.210346  121308 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 11:29:47.210433  121308 ssh_runner.go:195] Run: crio --version
	I0819 11:29:47.238323  121308 ssh_runner.go:195] Run: crio --version
	I0819 11:29:47.267905  121308 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 11:29:47.269300  121308 main.go:141] libmachine: (ha-503856) Calling .GetIP
	I0819 11:29:47.272144  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:47.272560  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:29:47.272587  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:29:47.272809  121308 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 11:29:47.276897  121308 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 11:29:47.289872  121308 kubeadm.go:883] updating cluster {Name:ha-503856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 11:29:47.289997  121308 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 11:29:47.290053  121308 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 11:29:47.321530  121308 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 11:29:47.321602  121308 ssh_runner.go:195] Run: which lz4
	I0819 11:29:47.325560  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0819 11:29:47.325677  121308 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 11:29:47.329750  121308 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 11:29:47.329793  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 11:29:48.565735  121308 crio.go:462] duration metric: took 1.240087569s to copy over tarball
	I0819 11:29:48.565816  121308 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 11:29:50.656031  121308 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.090179988s)
	I0819 11:29:50.656067  121308 crio.go:469] duration metric: took 2.09030002s to extract the tarball
	I0819 11:29:50.656077  121308 ssh_runner.go:146] rm: /preloaded.tar.lz4
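	For reference, the preload path above (stat for an existing tarball, scp it in, untar over /var, delete it, then re-list images) boils down to a few commands inside the VM. The sketch below mirrors the exact commands logged, with paths taken from this run; run as root:
	    # Minimal manual equivalent of the preload steps above
	    stat -c "%s %y" /preloaded.tar.lz4 || echo "tarball not present"
	    tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    rm -f /preloaded.tar.lz4
	    crictl images --output json | head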
	I0819 11:29:50.694696  121308 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 11:29:50.735948  121308 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 11:29:50.735975  121308 cache_images.go:84] Images are preloaded, skipping loading
	I0819 11:29:50.735983  121308 kubeadm.go:934] updating node { 192.168.39.102 8443 v1.31.0 crio true true} ...
	I0819 11:29:50.736128  121308 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-503856 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
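	The kubelet unit text printed above is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps later in this log. A quick way to confirm what systemd actually loaded, assuming those standard paths, would be:
	    # Sketch: confirm the kubelet drop-in and effective ExecStart after the scp/daemon-reload steps below
	    systemctl cat kubelet
	    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf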
	I0819 11:29:50.736196  121308 ssh_runner.go:195] Run: crio config
	I0819 11:29:50.785870  121308 cni.go:84] Creating CNI manager for ""
	I0819 11:29:50.785890  121308 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 11:29:50.785898  121308 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 11:29:50.785919  121308 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-503856 NodeName:ha-503856 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 11:29:50.786046  121308 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-503856"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
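	The generated kubeadm config above still uses the kubeadm.k8s.io/v1beta3 API, which kubeadm 1.31 accepts but warns about later in this log. If one wanted to sanity-check or migrate it by hand on the node, something along these lines would work (binary location and kubeadm.yaml path are taken from this log; the subcommands are standard kubeadm ones, not part of this run, and the output path is arbitrary):
	    # Sketch: validate / migrate the generated config copied to /var/tmp/minikube/kubeadm.yaml later in this log
	    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-migrated.yaml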
	I0819 11:29:50.786071  121308 kube-vip.go:115] generating kube-vip config ...
	I0819 11:29:50.786115  121308 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 11:29:50.803283  121308 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 11:29:50.803405  121308 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
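	Per the manifest above, kube-vip runs as a static pod on the host network, holds the 192.168.39.254/32 VIP on eth0 via ARP and leader election, and load-balances the API server on port 8443. Once the control plane is up, the VIP can be spot-checked with commands like these (values taken from the manifest; the checks themselves are not part of this run):
	    # Sketch: verify the control-plane VIP advertised by kube-vip (address/interface/port from the manifest above)
	    ip -4 addr show dev eth0 | grep 192.168.39.254
	    curl -ks https://192.168.39.254:8443/healthz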
	I0819 11:29:50.803466  121308 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 11:29:50.813282  121308 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 11:29:50.813350  121308 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 11:29:50.822899  121308 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0819 11:29:50.839252  121308 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 11:29:50.855440  121308 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0819 11:29:50.871819  121308 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0819 11:29:50.887822  121308 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 11:29:50.891655  121308 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 11:29:50.903950  121308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:29:51.035237  121308 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:29:51.051783  121308 certs.go:68] Setting up /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856 for IP: 192.168.39.102
	I0819 11:29:51.051809  121308 certs.go:194] generating shared ca certs ...
	I0819 11:29:51.051825  121308 certs.go:226] acquiring lock for ca certs: {Name:mkea0a5571fe2f86ea35780a8a0585cdcb5f186c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:29:51.051999  121308 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key
	I0819 11:29:51.052058  121308 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key
	I0819 11:29:51.052071  121308 certs.go:256] generating profile certs ...
	I0819 11:29:51.052162  121308 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.key
	I0819 11:29:51.052194  121308 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.crt with IP's: []
	I0819 11:29:51.270504  121308 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.crt ...
	I0819 11:29:51.270539  121308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.crt: {Name:mk9a88274d45fc56fb7a425e3de1e21485ead09f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:29:51.270741  121308 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.key ...
	I0819 11:29:51.270755  121308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.key: {Name:mk60b21abe048b27494c96025e666ab2288eae45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:29:51.270860  121308 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.333fb727
	I0819 11:29:51.270876  121308 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.333fb727 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.254]
	I0819 11:29:51.494646  121308 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.333fb727 ...
	I0819 11:29:51.494678  121308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.333fb727: {Name:mk41b07f16ec35f77ef14672e9516d40d7f2b12c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:29:51.494863  121308 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.333fb727 ...
	I0819 11:29:51.494879  121308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.333fb727: {Name:mk81aa2b88e45024ea1afdd52c3744c0cc1a2bc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:29:51.494973  121308 certs.go:381] copying /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.333fb727 -> /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt
	I0819 11:29:51.495051  121308 certs.go:385] copying /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.333fb727 -> /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key
	I0819 11:29:51.495106  121308 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key
	I0819 11:29:51.495120  121308 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.crt with IP's: []
	I0819 11:29:51.636785  121308 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.crt ...
	I0819 11:29:51.636814  121308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.crt: {Name:mk26e8fa9747f87d776243cca11643b6f4dc6224 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:29:51.636995  121308 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key ...
	I0819 11:29:51.637008  121308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key: {Name:mkc1e8c4b0167d5c4219c6cd16298094535b3d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:29:51.637102  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 11:29:51.637122  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 11:29:51.637133  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 11:29:51.637146  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 11:29:51.637158  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 11:29:51.637170  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 11:29:51.637181  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 11:29:51.637192  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 11:29:51.637253  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem (1338 bytes)
	W0819 11:29:51.637290  121308 certs.go:480] ignoring /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632_empty.pem, impossibly tiny 0 bytes
	I0819 11:29:51.637299  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 11:29:51.637319  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem (1082 bytes)
	I0819 11:29:51.637390  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem (1123 bytes)
	I0819 11:29:51.637417  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem (1679 bytes)
	I0819 11:29:51.637467  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 11:29:51.637496  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:29:51.637511  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem -> /usr/share/ca-certificates/106632.pem
	I0819 11:29:51.637523  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> /usr/share/ca-certificates/1066322.pem
	I0819 11:29:51.638063  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 11:29:51.663063  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 11:29:51.686885  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 11:29:51.711523  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 11:29:51.735720  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 11:29:51.763344  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 11:29:51.789570  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 11:29:51.837925  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 11:29:51.862448  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 11:29:51.885558  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem --> /usr/share/ca-certificates/106632.pem (1338 bytes)
	I0819 11:29:51.908522  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /usr/share/ca-certificates/1066322.pem (1708 bytes)
	I0819 11:29:51.931909  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 11:29:51.948273  121308 ssh_runner.go:195] Run: openssl version
	I0819 11:29:51.953992  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1066322.pem && ln -fs /usr/share/ca-certificates/1066322.pem /etc/ssl/certs/1066322.pem"
	I0819 11:29:51.965327  121308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1066322.pem
	I0819 11:29:51.969714  121308 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 11:26 /usr/share/ca-certificates/1066322.pem
	I0819 11:29:51.969791  121308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1066322.pem
	I0819 11:29:51.975410  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1066322.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 11:29:51.986334  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 11:29:51.997623  121308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:29:52.002081  121308 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:29:52.002168  121308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:29:52.007737  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 11:29:52.018637  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106632.pem && ln -fs /usr/share/ca-certificates/106632.pem /etc/ssl/certs/106632.pem"
	I0819 11:29:52.029514  121308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106632.pem
	I0819 11:29:52.033753  121308 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 11:26 /usr/share/ca-certificates/106632.pem
	I0819 11:29:52.033837  121308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106632.pem
	I0819 11:29:52.039589  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106632.pem /etc/ssl/certs/51391683.0"
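	The hashed symlink names used above (3ec20f2e.0, b5213941.0, 51391683.0) are the OpenSSL subject-hash values of the corresponding certificates, which is how the system trust store locates them. The pattern, mirroring the openssl/ln steps in this log for the minikubeCA case:
	    # Sketch: derive the /etc/ssl/certs symlink name for a CA cert the same way the steps above do
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"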
	I0819 11:29:52.050782  121308 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 11:29:52.054807  121308 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 11:29:52.054875  121308 kubeadm.go:392] StartCluster: {Name:ha-503856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:29:52.054967  121308 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 11:29:52.055026  121308 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 11:29:52.090316  121308 cri.go:89] found id: ""
	I0819 11:29:52.090394  121308 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 11:29:52.100701  121308 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 11:29:52.111233  121308 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 11:29:52.122410  121308 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 11:29:52.122432  121308 kubeadm.go:157] found existing configuration files:
	
	I0819 11:29:52.122490  121308 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 11:29:52.131490  121308 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 11:29:52.131553  121308 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 11:29:52.141136  121308 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 11:29:52.150138  121308 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 11:29:52.150203  121308 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 11:29:52.160566  121308 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 11:29:52.170302  121308 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 11:29:52.170370  121308 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 11:29:52.180199  121308 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 11:29:52.190011  121308 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 11:29:52.190088  121308 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 11:29:52.199926  121308 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 11:29:52.300321  121308 kubeadm.go:310] W0819 11:29:52.281176     848 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 11:29:52.301119  121308 kubeadm.go:310] W0819 11:29:52.282038     848 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 11:29:52.399857  121308 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 11:30:06.546603  121308 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 11:30:06.546686  121308 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 11:30:06.546781  121308 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 11:30:06.546931  121308 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 11:30:06.547047  121308 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 11:30:06.547144  121308 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 11:30:06.548546  121308 out.go:235]   - Generating certificates and keys ...
	I0819 11:30:06.548623  121308 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 11:30:06.548674  121308 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 11:30:06.548781  121308 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 11:30:06.548866  121308 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 11:30:06.548919  121308 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 11:30:06.548962  121308 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 11:30:06.549014  121308 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 11:30:06.549132  121308 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-503856 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I0819 11:30:06.549204  121308 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 11:30:06.549363  121308 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-503856 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I0819 11:30:06.549486  121308 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 11:30:06.549562  121308 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 11:30:06.549609  121308 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 11:30:06.549662  121308 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 11:30:06.549717  121308 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 11:30:06.549769  121308 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 11:30:06.549847  121308 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 11:30:06.549905  121308 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 11:30:06.549951  121308 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 11:30:06.550025  121308 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 11:30:06.550089  121308 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 11:30:06.552126  121308 out.go:235]   - Booting up control plane ...
	I0819 11:30:06.552206  121308 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 11:30:06.552270  121308 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 11:30:06.552331  121308 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 11:30:06.552451  121308 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 11:30:06.552556  121308 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 11:30:06.552624  121308 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 11:30:06.552760  121308 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 11:30:06.552897  121308 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 11:30:06.552956  121308 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.486946ms
	I0819 11:30:06.553055  121308 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 11:30:06.553133  121308 kubeadm.go:310] [api-check] The API server is healthy after 8.93290721s
	I0819 11:30:06.553260  121308 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 11:30:06.553380  121308 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 11:30:06.553455  121308 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 11:30:06.553685  121308 kubeadm.go:310] [mark-control-plane] Marking the node ha-503856 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 11:30:06.553771  121308 kubeadm.go:310] [bootstrap-token] Using token: yabek6.lq4ketpzskifobiz
	I0819 11:30:06.554982  121308 out.go:235]   - Configuring RBAC rules ...
	I0819 11:30:06.555100  121308 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 11:30:06.555202  121308 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 11:30:06.555356  121308 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 11:30:06.555516  121308 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 11:30:06.555639  121308 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 11:30:06.555785  121308 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 11:30:06.555919  121308 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 11:30:06.555981  121308 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 11:30:06.556039  121308 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 11:30:06.556048  121308 kubeadm.go:310] 
	I0819 11:30:06.556097  121308 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 11:30:06.556103  121308 kubeadm.go:310] 
	I0819 11:30:06.556172  121308 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 11:30:06.556178  121308 kubeadm.go:310] 
	I0819 11:30:06.556199  121308 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 11:30:06.556272  121308 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 11:30:06.556350  121308 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 11:30:06.556358  121308 kubeadm.go:310] 
	I0819 11:30:06.556429  121308 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 11:30:06.556438  121308 kubeadm.go:310] 
	I0819 11:30:06.556517  121308 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 11:30:06.556527  121308 kubeadm.go:310] 
	I0819 11:30:06.556607  121308 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 11:30:06.556705  121308 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 11:30:06.556782  121308 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 11:30:06.556788  121308 kubeadm.go:310] 
	I0819 11:30:06.556861  121308 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 11:30:06.556929  121308 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 11:30:06.556936  121308 kubeadm.go:310] 
	I0819 11:30:06.557008  121308 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yabek6.lq4ketpzskifobiz \
	I0819 11:30:06.557103  121308 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9a531b4470c09d8f50eed8894aa9b341ca16ed7810e6d53dbdc0d1c4cff0af52 \
	I0819 11:30:06.557122  121308 kubeadm.go:310] 	--control-plane 
	I0819 11:30:06.557128  121308 kubeadm.go:310] 
	I0819 11:30:06.557237  121308 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 11:30:06.557244  121308 kubeadm.go:310] 
	I0819 11:30:06.557314  121308 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yabek6.lq4ketpzskifobiz \
	I0819 11:30:06.557413  121308 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9a531b4470c09d8f50eed8894aa9b341ca16ed7810e6d53dbdc0d1c4cff0af52 
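	At this point kubeadm reports a successful init, and the join commands printed above (token plus CA cert hash) are what additional ha-503856 control-plane and worker nodes would run. A basic post-init check from inside the VM, using the admin.conf path named in the kubeadm output, would look like the sketch below (not part of this run):
	    # Sketch: quick post-init health check
	    sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes -o wide
	    sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -n kube-system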
	I0819 11:30:06.557424  121308 cni.go:84] Creating CNI manager for ""
	I0819 11:30:06.557429  121308 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 11:30:06.558979  121308 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 11:30:06.560153  121308 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 11:30:06.565679  121308 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 11:30:06.565705  121308 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 11:30:06.585137  121308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 11:30:06.909534  121308 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 11:30:06.909607  121308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:30:06.909639  121308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-503856 minikube.k8s.io/updated_at=2024_08_19T11_30_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934 minikube.k8s.io/name=ha-503856 minikube.k8s.io/primary=true
	I0819 11:30:07.119368  121308 ops.go:34] apiserver oom_adj: -16
	I0819 11:30:07.119519  121308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:30:07.248005  121308 kubeadm.go:1113] duration metric: took 338.457988ms to wait for elevateKubeSystemPrivileges
	I0819 11:30:07.248052  121308 kubeadm.go:394] duration metric: took 15.193184523s to StartCluster
	I0819 11:30:07.248075  121308 settings.go:142] acquiring lock: {Name:mk5d5753fc545a0b5fdfa44a1e5cbc5d198d9dfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:30:07.248156  121308 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 11:30:07.248847  121308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/kubeconfig: {Name:mk73914d2bd0db664ade6c952753a7dd30404784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:30:07.249064  121308 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 11:30:07.249064  121308 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 11:30:07.249089  121308 start.go:241] waiting for startup goroutines ...
	I0819 11:30:07.249101  121308 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 11:30:07.249196  121308 addons.go:69] Setting storage-provisioner=true in profile "ha-503856"
	I0819 11:30:07.249212  121308 addons.go:69] Setting default-storageclass=true in profile "ha-503856"
	I0819 11:30:07.249232  121308 addons.go:234] Setting addon storage-provisioner=true in "ha-503856"
	I0819 11:30:07.249250  121308 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-503856"
	I0819 11:30:07.249251  121308 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:30:07.249274  121308 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:30:07.249694  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:30:07.249714  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:30:07.249737  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:30:07.249761  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:30:07.265767  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44567
	I0819 11:30:07.265832  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40067
	I0819 11:30:07.266265  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:30:07.266314  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:30:07.266808  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:30:07.266825  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:30:07.266977  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:30:07.267000  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:30:07.267194  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:30:07.267342  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:30:07.267507  121308 main.go:141] libmachine: (ha-503856) Calling .GetState
	I0819 11:30:07.267745  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:30:07.267776  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:30:07.269663  121308 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 11:30:07.269905  121308 kapi.go:59] client config for ha-503856: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.crt", KeyFile:"/home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.key", CAFile:"/home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 11:30:07.270382  121308 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 11:30:07.270638  121308 addons.go:234] Setting addon default-storageclass=true in "ha-503856"
	I0819 11:30:07.270672  121308 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:30:07.270952  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:30:07.270986  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:30:07.283689  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38263
	I0819 11:30:07.284162  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:30:07.284752  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:30:07.284783  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:30:07.285161  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:30:07.285407  121308 main.go:141] libmachine: (ha-503856) Calling .GetState
	I0819 11:30:07.286517  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37505
	I0819 11:30:07.286908  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:30:07.287185  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:30:07.287409  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:30:07.287431  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:30:07.287935  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:30:07.288407  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:30:07.288433  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:30:07.289086  121308 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:30:07.290474  121308 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:30:07.290499  121308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 11:30:07.290522  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:30:07.293413  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:30:07.293863  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:30:07.293892  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:30:07.294038  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:30:07.294200  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:30:07.294339  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:30:07.294448  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:30:07.304382  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40211
	I0819 11:30:07.304936  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:30:07.305477  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:30:07.305504  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:30:07.305822  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:30:07.306032  121308 main.go:141] libmachine: (ha-503856) Calling .GetState
	I0819 11:30:07.307809  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:30:07.308080  121308 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 11:30:07.308098  121308 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 11:30:07.308119  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:30:07.311908  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:30:07.311986  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:30:07.312030  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:30:07.312069  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:30:07.312246  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:30:07.312967  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:30:07.313160  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
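
The two "scp memory --> ..." steps above stream generated manifests straight to the node over the SSH client that was just created. A rough, self-contained equivalent using golang.org/x/crypto/ssh (address, user, key path and target file are from the log; the helper name and the tee-based copy are illustrative, not minikube's actual implementation):

	package main

	import (
		"bytes"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// pushBytes writes data to remotePath on the host via "sudo tee",
	// roughly what the scp-memory steps in the log do.
	func pushBytes(addr, user, keyPath, remotePath string, data []byte) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return err
		}
		defer session.Close()
		session.Stdin = bytes.NewReader(data)
		return session.Run("sudo tee " + remotePath + " >/dev/null")
	}

	func main() {
		yaml := []byte("# storage-provisioner manifest would go here\n")
		if err := pushBytes("192.168.39.102:22", "docker",
			"/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa",
			"/etc/kubernetes/addons/storage-provisioner.yaml", yaml); err != nil {
			panic(err)
		}
	}
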
	I0819 11:30:07.386606  121308 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 11:30:07.453696  121308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:30:07.482457  121308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 11:30:07.710263  121308 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
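
The CoreDNS step above injects a hosts{} block so host.minikube.internal resolves to the host IP; the log does it with kubectl and sed on the node. The same edit can be expressed against the API directly. A sketch with client-go covering just the hosts{} part (clientset construction is omitted; the inserted block mirrors the one in the sed expression):

	package coredns

	import (
		"context"
		"fmt"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// InjectHostRecord adds a hosts{} block for host.minikube.internal to the
	// CoreDNS Corefile, the hosts part of the sed pipeline in the log.
	func InjectHostRecord(ctx context.Context, cs kubernetes.Interface, hostIP string) error {
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		corefile := cm.Data["Corefile"]
		if !strings.Contains(corefile, "host.minikube.internal") {
			// Insert the block just before the forward directive, like the sed "i" command.
			cm.Data["Corefile"] = strings.Replace(corefile, "        forward .", hosts+"        forward .", 1)
			_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
		}
		return err
	}
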
	I0819 11:30:08.060035  121308 main.go:141] libmachine: Making call to close driver server
	I0819 11:30:08.060061  121308 main.go:141] libmachine: (ha-503856) Calling .Close
	I0819 11:30:08.060069  121308 main.go:141] libmachine: Making call to close driver server
	I0819 11:30:08.060095  121308 main.go:141] libmachine: (ha-503856) Calling .Close
	I0819 11:30:08.060357  121308 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:30:08.060380  121308 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:30:08.060391  121308 main.go:141] libmachine: Making call to close driver server
	I0819 11:30:08.060400  121308 main.go:141] libmachine: (ha-503856) Calling .Close
	I0819 11:30:08.060434  121308 main.go:141] libmachine: (ha-503856) DBG | Closing plugin on server side
	I0819 11:30:08.060437  121308 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:30:08.060457  121308 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:30:08.060466  121308 main.go:141] libmachine: Making call to close driver server
	I0819 11:30:08.060474  121308 main.go:141] libmachine: (ha-503856) Calling .Close
	I0819 11:30:08.060645  121308 main.go:141] libmachine: (ha-503856) DBG | Closing plugin on server side
	I0819 11:30:08.060671  121308 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:30:08.060677  121308 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:30:08.060776  121308 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:30:08.060789  121308 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:30:08.060846  121308 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 11:30:08.060867  121308 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 11:30:08.060964  121308 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0819 11:30:08.060975  121308 round_trippers.go:469] Request Headers:
	I0819 11:30:08.060985  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:30:08.060993  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:30:08.083458  121308 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0819 11:30:08.084702  121308 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0819 11:30:08.084722  121308 round_trippers.go:469] Request Headers:
	I0819 11:30:08.084732  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:30:08.084738  121308 round_trippers.go:473]     Content-Type: application/json
	I0819 11:30:08.084744  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:30:08.100808  121308 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
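
The GET/PUT pair against /storageclasses/standard above is the default-storageclass addon asserting the default class. Assuming the intent is to (re)apply the usual is-default-class annotation, a client-go sketch would look like this (the function name is illustrative):

	package storageclass

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// SetDefault marks the named StorageClass as the cluster default, which is
	// roughly what the GET/PUT round-trip in the log amounts to.
	func SetDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err
	}
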
	I0819 11:30:08.101030  121308 main.go:141] libmachine: Making call to close driver server
	I0819 11:30:08.101055  121308 main.go:141] libmachine: (ha-503856) Calling .Close
	I0819 11:30:08.101425  121308 main.go:141] libmachine: Successfully made call to close driver server
	I0819 11:30:08.101485  121308 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 11:30:08.101528  121308 main.go:141] libmachine: (ha-503856) DBG | Closing plugin on server side
	I0819 11:30:08.103230  121308 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0819 11:30:08.104327  121308 addons.go:510] duration metric: took 855.225246ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0819 11:30:08.104372  121308 start.go:246] waiting for cluster config update ...
	I0819 11:30:08.104384  121308 start.go:255] writing updated cluster config ...
	I0819 11:30:08.105911  121308 out.go:201] 
	I0819 11:30:08.107390  121308 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:30:08.107480  121308 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/config.json ...
	I0819 11:30:08.109052  121308 out.go:177] * Starting "ha-503856-m02" control-plane node in "ha-503856" cluster
	I0819 11:30:08.110115  121308 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 11:30:08.110150  121308 cache.go:56] Caching tarball of preloaded images
	I0819 11:30:08.110265  121308 preload.go:172] Found /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 11:30:08.110282  121308 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 11:30:08.110379  121308 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/config.json ...
	I0819 11:30:08.110595  121308 start.go:360] acquireMachinesLock for ha-503856-m02: {Name:mk79269fbbd6912302c8df8ddb039d4e2f1a0790 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:30:08.110658  121308 start.go:364] duration metric: took 39.83µs to acquireMachinesLock for "ha-503856-m02"
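
acquireMachinesLock above serialises machine creation with a 500ms retry delay and a 13m timeout. As a rough stand-in (minikube's real lock implementation may differ), here is a flock-based acquire loop with the same delay/timeout shape, using an illustrative lock path:

	package main

	import (
		"fmt"
		"os"
		"syscall"
		"time"
	)

	// acquireLock takes an exclusive lock on path, retrying every delay until
	// timeout, mirroring the Delay/Timeout behaviour logged above.
	func acquireLock(path string, delay, timeout time.Duration) (*os.File, error) {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o644)
		if err != nil {
			return nil, err
		}
		deadline := time.Now().Add(timeout)
		for {
			if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
				return f, nil
			}
			if time.Now().After(deadline) {
				f.Close()
				return nil, fmt.Errorf("timed out waiting for lock %s", path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		f, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			panic(err)
		}
		defer f.Close()
		// ... create the machine while holding the lock ...
	}
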
	I0819 11:30:08.110683  121308 start.go:93] Provisioning new machine with config: &{Name:ha-503856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 11:30:08.110763  121308 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0819 11:30:08.112395  121308 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:30:08.112515  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:30:08.112542  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:30:08.128109  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35545
	I0819 11:30:08.128643  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:30:08.129206  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:30:08.129228  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:30:08.129554  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:30:08.129809  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetMachineName
	I0819 11:30:08.130006  121308 main.go:141] libmachine: (ha-503856-m02) Calling .DriverName
	I0819 11:30:08.130207  121308 start.go:159] libmachine.API.Create for "ha-503856" (driver="kvm2")
	I0819 11:30:08.130232  121308 client.go:168] LocalClient.Create starting
	I0819 11:30:08.130260  121308 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem
	I0819 11:30:08.130294  121308 main.go:141] libmachine: Decoding PEM data...
	I0819 11:30:08.130310  121308 main.go:141] libmachine: Parsing certificate...
	I0819 11:30:08.130362  121308 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem
	I0819 11:30:08.130383  121308 main.go:141] libmachine: Decoding PEM data...
	I0819 11:30:08.130393  121308 main.go:141] libmachine: Parsing certificate...
	I0819 11:30:08.130409  121308 main.go:141] libmachine: Running pre-create checks...
	I0819 11:30:08.130417  121308 main.go:141] libmachine: (ha-503856-m02) Calling .PreCreateCheck
	I0819 11:30:08.130604  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetConfigRaw
	I0819 11:30:08.131002  121308 main.go:141] libmachine: Creating machine...
	I0819 11:30:08.131014  121308 main.go:141] libmachine: (ha-503856-m02) Calling .Create
	I0819 11:30:08.131163  121308 main.go:141] libmachine: (ha-503856-m02) Creating KVM machine...
	I0819 11:30:08.132517  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found existing default KVM network
	I0819 11:30:08.132683  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found existing private KVM network mk-ha-503856
	I0819 11:30:08.132816  121308 main.go:141] libmachine: (ha-503856-m02) Setting up store path in /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02 ...
	I0819 11:30:08.132842  121308 main.go:141] libmachine: (ha-503856-m02) Building disk image from file:///home/jenkins/minikube-integration/19476-99410/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 11:30:08.132895  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:08.132790  121667 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 11:30:08.132996  121308 main.go:141] libmachine: (ha-503856-m02) Downloading /home/jenkins/minikube-integration/19476-99410/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19476-99410/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 11:30:08.389790  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:08.389614  121667 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/id_rsa...
	I0819 11:30:08.583984  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:08.583797  121667 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/ha-503856-m02.rawdisk...
	I0819 11:30:08.584027  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Writing magic tar header
	I0819 11:30:08.584042  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Writing SSH key tar header
	I0819 11:30:08.584055  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:08.583938  121667 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02 ...
	I0819 11:30:08.584071  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02
	I0819 11:30:08.584090  121308 main.go:141] libmachine: (ha-503856-m02) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02 (perms=drwx------)
	I0819 11:30:08.584104  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube/machines
	I0819 11:30:08.584116  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 11:30:08.584123  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410
	I0819 11:30:08.584134  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 11:30:08.584147  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Checking permissions on dir: /home/jenkins
	I0819 11:30:08.584158  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Checking permissions on dir: /home
	I0819 11:30:08.584167  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Skipping /home - not owner
	I0819 11:30:08.584179  121308 main.go:141] libmachine: (ha-503856-m02) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube/machines (perms=drwxr-xr-x)
	I0819 11:30:08.584216  121308 main.go:141] libmachine: (ha-503856-m02) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube (perms=drwxr-xr-x)
	I0819 11:30:08.584237  121308 main.go:141] libmachine: (ha-503856-m02) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410 (perms=drwxrwxr-x)
	I0819 11:30:08.584246  121308 main.go:141] libmachine: (ha-503856-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 11:30:08.584257  121308 main.go:141] libmachine: (ha-503856-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
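
The permission checks above walk from the new machine directory up towards /, adding owner-execute bits where the directories are owned by the current user and skipping the rest (hence "Skipping /home - not owner"). A small sketch of that walk (package and function names are illustrative):

	package perms

	import (
		"os"
		"path/filepath"
		"syscall"
	)

	// EnsureTraversable walks from dir up towards "/" and adds the owner-execute
	// bit on directories owned by the current user, skipping the rest -- the same
	// shape as the permission checks in the log above.
	func EnsureTraversable(dir string) error {
		uid := os.Getuid()
		for d := dir; d != "/" && d != "."; d = filepath.Dir(d) {
			info, err := os.Stat(d)
			if err != nil {
				return err
			}
			st, ok := info.Sys().(*syscall.Stat_t)
			if !ok || int(st.Uid) != uid {
				continue // e.g. "Skipping /home - not owner"
			}
			if err := os.Chmod(d, info.Mode()|0o100); err != nil {
				return err
			}
		}
		return nil
	}
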
	I0819 11:30:08.584291  121308 main.go:141] libmachine: (ha-503856-m02) Creating domain...
	I0819 11:30:08.585250  121308 main.go:141] libmachine: (ha-503856-m02) define libvirt domain using xml: 
	I0819 11:30:08.585272  121308 main.go:141] libmachine: (ha-503856-m02) <domain type='kvm'>
	I0819 11:30:08.585282  121308 main.go:141] libmachine: (ha-503856-m02)   <name>ha-503856-m02</name>
	I0819 11:30:08.585294  121308 main.go:141] libmachine: (ha-503856-m02)   <memory unit='MiB'>2200</memory>
	I0819 11:30:08.585304  121308 main.go:141] libmachine: (ha-503856-m02)   <vcpu>2</vcpu>
	I0819 11:30:08.585313  121308 main.go:141] libmachine: (ha-503856-m02)   <features>
	I0819 11:30:08.585324  121308 main.go:141] libmachine: (ha-503856-m02)     <acpi/>
	I0819 11:30:08.585334  121308 main.go:141] libmachine: (ha-503856-m02)     <apic/>
	I0819 11:30:08.585342  121308 main.go:141] libmachine: (ha-503856-m02)     <pae/>
	I0819 11:30:08.585355  121308 main.go:141] libmachine: (ha-503856-m02)     
	I0819 11:30:08.585366  121308 main.go:141] libmachine: (ha-503856-m02)   </features>
	I0819 11:30:08.585376  121308 main.go:141] libmachine: (ha-503856-m02)   <cpu mode='host-passthrough'>
	I0819 11:30:08.585384  121308 main.go:141] libmachine: (ha-503856-m02)   
	I0819 11:30:08.585391  121308 main.go:141] libmachine: (ha-503856-m02)   </cpu>
	I0819 11:30:08.585402  121308 main.go:141] libmachine: (ha-503856-m02)   <os>
	I0819 11:30:08.585413  121308 main.go:141] libmachine: (ha-503856-m02)     <type>hvm</type>
	I0819 11:30:08.585435  121308 main.go:141] libmachine: (ha-503856-m02)     <boot dev='cdrom'/>
	I0819 11:30:08.585452  121308 main.go:141] libmachine: (ha-503856-m02)     <boot dev='hd'/>
	I0819 11:30:08.585463  121308 main.go:141] libmachine: (ha-503856-m02)     <bootmenu enable='no'/>
	I0819 11:30:08.585468  121308 main.go:141] libmachine: (ha-503856-m02)   </os>
	I0819 11:30:08.585476  121308 main.go:141] libmachine: (ha-503856-m02)   <devices>
	I0819 11:30:08.585481  121308 main.go:141] libmachine: (ha-503856-m02)     <disk type='file' device='cdrom'>
	I0819 11:30:08.585491  121308 main.go:141] libmachine: (ha-503856-m02)       <source file='/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/boot2docker.iso'/>
	I0819 11:30:08.585497  121308 main.go:141] libmachine: (ha-503856-m02)       <target dev='hdc' bus='scsi'/>
	I0819 11:30:08.585506  121308 main.go:141] libmachine: (ha-503856-m02)       <readonly/>
	I0819 11:30:08.585510  121308 main.go:141] libmachine: (ha-503856-m02)     </disk>
	I0819 11:30:08.585516  121308 main.go:141] libmachine: (ha-503856-m02)     <disk type='file' device='disk'>
	I0819 11:30:08.585532  121308 main.go:141] libmachine: (ha-503856-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 11:30:08.585544  121308 main.go:141] libmachine: (ha-503856-m02)       <source file='/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/ha-503856-m02.rawdisk'/>
	I0819 11:30:08.585552  121308 main.go:141] libmachine: (ha-503856-m02)       <target dev='hda' bus='virtio'/>
	I0819 11:30:08.585558  121308 main.go:141] libmachine: (ha-503856-m02)     </disk>
	I0819 11:30:08.585565  121308 main.go:141] libmachine: (ha-503856-m02)     <interface type='network'>
	I0819 11:30:08.585571  121308 main.go:141] libmachine: (ha-503856-m02)       <source network='mk-ha-503856'/>
	I0819 11:30:08.585578  121308 main.go:141] libmachine: (ha-503856-m02)       <model type='virtio'/>
	I0819 11:30:08.585584  121308 main.go:141] libmachine: (ha-503856-m02)     </interface>
	I0819 11:30:08.585590  121308 main.go:141] libmachine: (ha-503856-m02)     <interface type='network'>
	I0819 11:30:08.585615  121308 main.go:141] libmachine: (ha-503856-m02)       <source network='default'/>
	I0819 11:30:08.585637  121308 main.go:141] libmachine: (ha-503856-m02)       <model type='virtio'/>
	I0819 11:30:08.585649  121308 main.go:141] libmachine: (ha-503856-m02)     </interface>
	I0819 11:30:08.585659  121308 main.go:141] libmachine: (ha-503856-m02)     <serial type='pty'>
	I0819 11:30:08.585666  121308 main.go:141] libmachine: (ha-503856-m02)       <target port='0'/>
	I0819 11:30:08.585671  121308 main.go:141] libmachine: (ha-503856-m02)     </serial>
	I0819 11:30:08.585679  121308 main.go:141] libmachine: (ha-503856-m02)     <console type='pty'>
	I0819 11:30:08.585692  121308 main.go:141] libmachine: (ha-503856-m02)       <target type='serial' port='0'/>
	I0819 11:30:08.585706  121308 main.go:141] libmachine: (ha-503856-m02)     </console>
	I0819 11:30:08.585721  121308 main.go:141] libmachine: (ha-503856-m02)     <rng model='virtio'>
	I0819 11:30:08.585735  121308 main.go:141] libmachine: (ha-503856-m02)       <backend model='random'>/dev/random</backend>
	I0819 11:30:08.585745  121308 main.go:141] libmachine: (ha-503856-m02)     </rng>
	I0819 11:30:08.585756  121308 main.go:141] libmachine: (ha-503856-m02)     
	I0819 11:30:08.585763  121308 main.go:141] libmachine: (ha-503856-m02)     
	I0819 11:30:08.585770  121308 main.go:141] libmachine: (ha-503856-m02)   </devices>
	I0819 11:30:08.585779  121308 main.go:141] libmachine: (ha-503856-m02) </domain>
	I0819 11:30:08.585792  121308 main.go:141] libmachine: (ha-503856-m02) 
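
The XML document printed above is handed to libvirt to define and start the guest. A minimal sketch assuming the libvirt.org/go/libvirt bindings (the connection URI matches the KVMQemuURI in the config dump; domainXML stands in for the full document above, so as written the define call would be rejected until it is filled in):

	package main

	import (
		"fmt"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		// domainXML would be the full <domain type='kvm'> document printed above;
		// with this placeholder, DomainDefineXML will return an error from libvirt.
		domainXML := "<domain type='kvm'>...</domain>"

		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		// Define the persistent domain from XML, then start it.
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			panic(err)
		}
		defer dom.Free()
		if err := dom.Create(); err != nil {
			panic(err)
		}
		fmt.Println("domain started")
	}
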
	I0819 11:30:08.592506  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:a8:75:24 in network default
	I0819 11:30:08.593200  121308 main.go:141] libmachine: (ha-503856-m02) Ensuring networks are active...
	I0819 11:30:08.593230  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:08.593954  121308 main.go:141] libmachine: (ha-503856-m02) Ensuring network default is active
	I0819 11:30:08.594280  121308 main.go:141] libmachine: (ha-503856-m02) Ensuring network mk-ha-503856 is active
	I0819 11:30:08.594611  121308 main.go:141] libmachine: (ha-503856-m02) Getting domain xml...
	I0819 11:30:08.595301  121308 main.go:141] libmachine: (ha-503856-m02) Creating domain...
	I0819 11:30:09.826428  121308 main.go:141] libmachine: (ha-503856-m02) Waiting to get IP...
	I0819 11:30:09.827339  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:09.827832  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:09.827867  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:09.827803  121667 retry.go:31] will retry after 299.927656ms: waiting for machine to come up
	I0819 11:30:10.129376  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:10.129988  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:10.130021  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:10.129940  121667 retry.go:31] will retry after 311.299317ms: waiting for machine to come up
	I0819 11:30:10.443603  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:10.443986  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:10.444012  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:10.443955  121667 retry.go:31] will retry after 295.003949ms: waiting for machine to come up
	I0819 11:30:10.740642  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:10.741084  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:10.741113  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:10.741048  121667 retry.go:31] will retry after 513.484638ms: waiting for machine to come up
	I0819 11:30:11.255793  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:11.256269  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:11.256294  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:11.256245  121667 retry.go:31] will retry after 566.925586ms: waiting for machine to come up
	I0819 11:30:11.825259  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:11.825767  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:11.825811  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:11.825738  121667 retry.go:31] will retry after 700.755721ms: waiting for machine to come up
	I0819 11:30:12.527531  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:12.528038  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:12.528065  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:12.527993  121667 retry.go:31] will retry after 797.139943ms: waiting for machine to come up
	I0819 11:30:13.326500  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:13.326995  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:13.327017  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:13.326953  121667 retry.go:31] will retry after 1.316756605s: waiting for machine to come up
	I0819 11:30:14.645396  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:14.645791  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:14.645825  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:14.645741  121667 retry.go:31] will retry after 1.440866555s: waiting for machine to come up
	I0819 11:30:16.088424  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:16.088883  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:16.088916  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:16.088837  121667 retry.go:31] will retry after 1.484428334s: waiting for machine to come up
	I0819 11:30:17.575583  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:17.576094  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:17.576117  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:17.576050  121667 retry.go:31] will retry after 1.746492547s: waiting for machine to come up
	I0819 11:30:19.324664  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:19.325115  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:19.325145  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:19.325073  121667 retry.go:31] will retry after 2.555649627s: waiting for machine to come up
	I0819 11:30:21.883814  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:21.884198  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:21.884223  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:21.884163  121667 retry.go:31] will retry after 4.287218616s: waiting for machine to come up
	I0819 11:30:26.174809  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:26.175121  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find current IP address of domain ha-503856-m02 in network mk-ha-503856
	I0819 11:30:26.175146  121308 main.go:141] libmachine: (ha-503856-m02) DBG | I0819 11:30:26.175084  121667 retry.go:31] will retry after 4.431060865s: waiting for machine to come up
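
"Waiting to get IP" above repeatedly polls the libvirt network for a DHCP lease matching the new domain's MAC address until one appears, as it does a few lines below. A sketch of that poll loop, again assuming the libvirt.org/go/libvirt bindings (network name and MAC are the ones from the log; the fixed 2s poll interval and 4m cap are illustrative, whereas the log uses a growing backoff):

	package main

	import (
		"fmt"
		"strings"
		"time"

		libvirt "libvirt.org/go/libvirt"
	)

	// waitForIP polls the DHCP leases of the given network until an entry for
	// mac appears, mirroring the retry loop in the log.
	func waitForIP(conn *libvirt.Connect, network, mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			net, err := conn.LookupNetworkByName(network)
			if err != nil {
				return "", err
			}
			leases, err := net.GetDHCPLeases()
			net.Free()
			if err != nil {
				return "", err
			}
			for _, l := range leases {
				if strings.EqualFold(l.Mac, mac) && l.IPaddr != "" {
					return l.IPaddr, nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return "", fmt.Errorf("no DHCP lease for %s in %s", mac, network)
	}

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		ip, err := waitForIP(conn, "mk-ha-503856", "52:54:00:f7:a0:c4", 4*time.Minute)
		if err != nil {
			panic(err)
		}
		fmt.Println("got IP:", ip)
	}
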
	I0819 11:30:30.608735  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:30.609284  121308 main.go:141] libmachine: (ha-503856-m02) Found IP for machine: 192.168.39.183
	I0819 11:30:30.609311  121308 main.go:141] libmachine: (ha-503856-m02) Reserving static IP address...
	I0819 11:30:30.609326  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has current primary IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:30.609690  121308 main.go:141] libmachine: (ha-503856-m02) DBG | unable to find host DHCP lease matching {name: "ha-503856-m02", mac: "52:54:00:f7:a0:c4", ip: "192.168.39.183"} in network mk-ha-503856
	I0819 11:30:30.690434  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Getting to WaitForSSH function...
	I0819 11:30:30.690461  121308 main.go:141] libmachine: (ha-503856-m02) Reserved static IP address: 192.168.39.183
	I0819 11:30:30.690475  121308 main.go:141] libmachine: (ha-503856-m02) Waiting for SSH to be available...
	I0819 11:30:30.693230  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:30.693633  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:30.693665  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:30.693784  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Using SSH client type: external
	I0819 11:30:30.693811  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/id_rsa (-rw-------)
	I0819 11:30:30.693843  121308 main.go:141] libmachine: (ha-503856-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 11:30:30.693854  121308 main.go:141] libmachine: (ha-503856-m02) DBG | About to run SSH command:
	I0819 11:30:30.693886  121308 main.go:141] libmachine: (ha-503856-m02) DBG | exit 0
	I0819 11:30:30.820077  121308 main.go:141] libmachine: (ha-503856-m02) DBG | SSH cmd err, output: <nil>: 
	I0819 11:30:30.820545  121308 main.go:141] libmachine: (ha-503856-m02) KVM machine creation complete!
	I0819 11:30:30.820897  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetConfigRaw
	I0819 11:30:30.821423  121308 main.go:141] libmachine: (ha-503856-m02) Calling .DriverName
	I0819 11:30:30.821680  121308 main.go:141] libmachine: (ha-503856-m02) Calling .DriverName
	I0819 11:30:30.821884  121308 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 11:30:30.821898  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetState
	I0819 11:30:30.823125  121308 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 11:30:30.823141  121308 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 11:30:30.823152  121308 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 11:30:30.823158  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:30:30.825412  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:30.825831  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:30.825870  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:30.825986  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:30:30.826173  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:30.826342  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:30.826472  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:30:30.826650  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:30:30.826858  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0819 11:30:30.826877  121308 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 11:30:30.934882  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: 
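
The "Waiting for SSH to be available..." phase is just a retried no-op command ("exit 0") over SSH until the guest answers, as seen above. A sketch of that probe with golang.org/x/crypto/ssh (the caller supplies a ready ssh.ClientConfig; the 3s retry interval is illustrative):

	package sshwait

	import (
		"time"

		"golang.org/x/crypto/ssh"
	)

	// WaitForSSH dials addr and runs "exit 0" until it succeeds or the timeout
	// expires, which is all the SSH-availability check needs to do.
	func WaitForSSH(addr string, cfg *ssh.ClientConfig, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		var lastErr error
		for time.Now().Before(deadline) {
			client, err := ssh.Dial("tcp", addr, cfg)
			if err == nil {
				session, serr := client.NewSession()
				if serr == nil {
					serr = session.Run("exit 0")
					session.Close()
				}
				client.Close()
				if serr == nil {
					return nil
				}
				lastErr = serr
			} else {
				lastErr = err
			}
			time.Sleep(3 * time.Second)
		}
		return lastErr
	}
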
	I0819 11:30:30.934908  121308 main.go:141] libmachine: Detecting the provisioner...
	I0819 11:30:30.934916  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:30:30.937760  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:30.938174  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:30.938201  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:30.938321  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:30:30.938551  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:30.938701  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:30.938946  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:30:30.939113  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:30:30.939291  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0819 11:30:30.939304  121308 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 11:30:31.048405  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 11:30:31.048491  121308 main.go:141] libmachine: found compatible host: buildroot
	I0819 11:30:31.048501  121308 main.go:141] libmachine: Provisioning with buildroot...
	I0819 11:30:31.048509  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetMachineName
	I0819 11:30:31.048768  121308 buildroot.go:166] provisioning hostname "ha-503856-m02"
	I0819 11:30:31.048797  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetMachineName
	I0819 11:30:31.048986  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:30:31.051548  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.051960  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:31.051995  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.052156  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:30:31.052353  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:31.052495  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:31.052744  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:30:31.052951  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:30:31.053118  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0819 11:30:31.053130  121308 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-503856-m02 && echo "ha-503856-m02" | sudo tee /etc/hostname
	I0819 11:30:31.173786  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-503856-m02
	
	I0819 11:30:31.173823  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:30:31.176922  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.177296  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:31.177326  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.177504  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:30:31.177745  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:31.177916  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:31.178069  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:30:31.178241  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:30:31.178409  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0819 11:30:31.178423  121308 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-503856-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-503856-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-503856-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 11:30:31.293191  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 11:30:31.293225  121308 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19476-99410/.minikube CaCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19476-99410/.minikube}
	I0819 11:30:31.293242  121308 buildroot.go:174] setting up certificates
	I0819 11:30:31.293256  121308 provision.go:84] configureAuth start
	I0819 11:30:31.293267  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetMachineName
	I0819 11:30:31.293589  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetIP
	I0819 11:30:31.296212  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.296559  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:31.296588  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.296783  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:30:31.299091  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.299458  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:31.299487  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.299640  121308 provision.go:143] copyHostCerts
	I0819 11:30:31.299670  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 11:30:31.299702  121308 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem, removing ...
	I0819 11:30:31.299710  121308 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 11:30:31.299825  121308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem (1082 bytes)
	I0819 11:30:31.299948  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 11:30:31.299976  121308 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem, removing ...
	I0819 11:30:31.299984  121308 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 11:30:31.300017  121308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem (1123 bytes)
	I0819 11:30:31.300074  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 11:30:31.300100  121308 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem, removing ...
	I0819 11:30:31.300110  121308 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 11:30:31.300143  121308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem (1679 bytes)
	I0819 11:30:31.300218  121308 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem org=jenkins.ha-503856-m02 san=[127.0.0.1 192.168.39.183 ha-503856-m02 localhost minikube]
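
provision.go:117 above mints a server certificate signed by the minikube CA, with the SAN list shown in the log. A compact crypto/x509 sketch of that step; to stay self-contained it generates a throwaway CA instead of loading ca.pem/ca-key.pem, and error handling is elided:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA so the sketch is self-contained; the real flow loads the
		// existing ca.pem / ca-key.pem from the .minikube/certs directory instead.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SANs listed in the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-503856-m02"}},
			DNSNames:     []string{"ha-503856-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.183")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
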
	I0819 11:30:31.427800  121308 provision.go:177] copyRemoteCerts
	I0819 11:30:31.427860  121308 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 11:30:31.427888  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:30:31.430576  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.430972  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:31.430999  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.431226  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:30:31.431384  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:31.431573  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:30:31.431680  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/id_rsa Username:docker}
	I0819 11:30:31.513374  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 11:30:31.513451  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 11:30:31.537929  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 11:30:31.538005  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 11:30:31.561451  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 11:30:31.561522  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 11:30:31.585683  121308 provision.go:87] duration metric: took 292.413889ms to configureAuth
	I0819 11:30:31.585715  121308 buildroot.go:189] setting minikube options for container-runtime
	I0819 11:30:31.585891  121308 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:30:31.585969  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:30:31.588785  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.589189  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:31.589220  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.589434  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:30:31.589671  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:31.589835  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:31.589966  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:30:31.590200  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:30:31.590361  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0819 11:30:31.590376  121308 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 11:30:31.858131  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 11:30:31.858162  121308 main.go:141] libmachine: Checking connection to Docker...
	I0819 11:30:31.858173  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetURL
	I0819 11:30:31.859585  121308 main.go:141] libmachine: (ha-503856-m02) DBG | Using libvirt version 6000000
	I0819 11:30:31.861824  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.862204  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:31.862229  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.862387  121308 main.go:141] libmachine: Docker is up and running!
	I0819 11:30:31.862401  121308 main.go:141] libmachine: Reticulating splines...
	I0819 11:30:31.862408  121308 client.go:171] duration metric: took 23.732169027s to LocalClient.Create
	I0819 11:30:31.862431  121308 start.go:167] duration metric: took 23.73222649s to libmachine.API.Create "ha-503856"
	I0819 11:30:31.862454  121308 start.go:293] postStartSetup for "ha-503856-m02" (driver="kvm2")
	I0819 11:30:31.862467  121308 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 11:30:31.862485  121308 main.go:141] libmachine: (ha-503856-m02) Calling .DriverName
	I0819 11:30:31.862762  121308 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 11:30:31.862790  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:30:31.865315  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.865638  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:31.865667  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.865870  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:30:31.866061  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:31.866206  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:30:31.866313  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/id_rsa Username:docker}
	I0819 11:30:31.950447  121308 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 11:30:31.954913  121308 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 11:30:31.954939  121308 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/addons for local assets ...
	I0819 11:30:31.955007  121308 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/files for local assets ...
	I0819 11:30:31.955098  121308 filesync.go:149] local asset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> 1066322.pem in /etc/ssl/certs
	I0819 11:30:31.955111  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> /etc/ssl/certs/1066322.pem
	I0819 11:30:31.955224  121308 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 11:30:31.966063  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 11:30:31.989442  121308 start.go:296] duration metric: took 126.969365ms for postStartSetup
	I0819 11:30:31.989502  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetConfigRaw
	I0819 11:30:31.990112  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetIP
	I0819 11:30:31.992933  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.993258  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:31.993284  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.993519  121308 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/config.json ...
	I0819 11:30:31.993720  121308 start.go:128] duration metric: took 23.882946899s to createHost
	I0819 11:30:31.993743  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:30:31.995746  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.996103  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:31.996133  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:31.996339  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:30:31.996547  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:31.996739  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:31.996870  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:30:31.997017  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:30:31.997188  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0819 11:30:31.997199  121308 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 11:30:32.108440  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724067032.084837525
	
	I0819 11:30:32.108462  121308 fix.go:216] guest clock: 1724067032.084837525
	I0819 11:30:32.108472  121308 fix.go:229] Guest: 2024-08-19 11:30:32.084837525 +0000 UTC Remote: 2024-08-19 11:30:31.993731508 +0000 UTC m=+67.007093531 (delta=91.106017ms)
	I0819 11:30:32.108488  121308 fix.go:200] guest clock delta is within tolerance: 91.106017ms
	I0819 11:30:32.108493  121308 start.go:83] releasing machines lock for "ha-503856-m02", held for 23.997823637s
	I0819 11:30:32.108516  121308 main.go:141] libmachine: (ha-503856-m02) Calling .DriverName
	I0819 11:30:32.108789  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetIP
	I0819 11:30:32.111710  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:32.112085  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:32.112106  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:32.114598  121308 out.go:177] * Found network options:
	I0819 11:30:32.116087  121308 out.go:177]   - NO_PROXY=192.168.39.102
	W0819 11:30:32.117413  121308 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 11:30:32.117452  121308 main.go:141] libmachine: (ha-503856-m02) Calling .DriverName
	I0819 11:30:32.118107  121308 main.go:141] libmachine: (ha-503856-m02) Calling .DriverName
	I0819 11:30:32.118324  121308 main.go:141] libmachine: (ha-503856-m02) Calling .DriverName
	I0819 11:30:32.118429  121308 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 11:30:32.118481  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	W0819 11:30:32.118501  121308 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 11:30:32.118572  121308 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 11:30:32.118590  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:30:32.121159  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:32.121469  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:32.121497  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:32.121518  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:32.121619  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:30:32.121843  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:32.121942  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:32.121966  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:32.122007  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:30:32.122127  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:30:32.122192  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/id_rsa Username:docker}
	I0819 11:30:32.122282  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:30:32.122415  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:30:32.122545  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/id_rsa Username:docker}
	I0819 11:30:32.361893  121308 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 11:30:32.367427  121308 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 11:30:32.367508  121308 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 11:30:32.383095  121308 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 11:30:32.383128  121308 start.go:495] detecting cgroup driver to use...
	I0819 11:30:32.383213  121308 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 11:30:32.399017  121308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 11:30:32.413333  121308 docker.go:217] disabling cri-docker service (if available) ...
	I0819 11:30:32.413391  121308 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 11:30:32.427045  121308 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 11:30:32.440483  121308 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 11:30:32.554335  121308 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 11:30:32.722708  121308 docker.go:233] disabling docker service ...
	I0819 11:30:32.722791  121308 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 11:30:32.737323  121308 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 11:30:32.750688  121308 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 11:30:32.866584  121308 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 11:30:33.000130  121308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 11:30:33.014527  121308 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 11:30:33.033199  121308 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 11:30:33.033267  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:30:33.043906  121308 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 11:30:33.043988  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:30:33.054852  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:30:33.065887  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:30:33.076866  121308 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 11:30:33.087958  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:30:33.098863  121308 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:30:33.116386  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
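	Note: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place. A minimal sketch of the fragment those edits should leave behind, reconstructed from the logged commands rather than read back from the VM (so the exact placement of each key within the file may differ):
	
	    pause_image = "registry.k8s.io/pause:3.10"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	
	The /etc/crictl.yaml written just before points crictl at the same unix:///var/run/crio/crio.sock endpoint.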
	I0819 11:30:33.127225  121308 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 11:30:33.137169  121308 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 11:30:33.137237  121308 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 11:30:33.151498  121308 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 11:30:33.161812  121308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:30:33.283359  121308 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 11:30:33.415690  121308 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 11:30:33.415778  121308 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 11:30:33.420433  121308 start.go:563] Will wait 60s for crictl version
	I0819 11:30:33.420518  121308 ssh_runner.go:195] Run: which crictl
	I0819 11:30:33.424267  121308 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 11:30:33.458933  121308 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 11:30:33.459018  121308 ssh_runner.go:195] Run: crio --version
	I0819 11:30:33.487119  121308 ssh_runner.go:195] Run: crio --version
	I0819 11:30:33.516093  121308 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 11:30:33.517495  121308 out.go:177]   - env NO_PROXY=192.168.39.102
	I0819 11:30:33.518782  121308 main.go:141] libmachine: (ha-503856-m02) Calling .GetIP
	I0819 11:30:33.521533  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:33.521862  121308 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:30:22 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:30:33.521897  121308 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:30:33.522107  121308 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 11:30:33.526210  121308 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 11:30:33.538699  121308 mustload.go:65] Loading cluster: ha-503856
	I0819 11:30:33.538932  121308 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:30:33.539195  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:30:33.539224  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:30:33.554159  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38577
	I0819 11:30:33.554634  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:30:33.555117  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:30:33.555136  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:30:33.555462  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:30:33.555695  121308 main.go:141] libmachine: (ha-503856) Calling .GetState
	I0819 11:30:33.557243  121308 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:30:33.557531  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:30:33.557565  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:30:33.573352  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36809
	I0819 11:30:33.573845  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:30:33.574317  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:30:33.574342  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:30:33.574702  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:30:33.574892  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:30:33.575054  121308 certs.go:68] Setting up /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856 for IP: 192.168.39.183
	I0819 11:30:33.575066  121308 certs.go:194] generating shared ca certs ...
	I0819 11:30:33.575080  121308 certs.go:226] acquiring lock for ca certs: {Name:mkea0a5571fe2f86ea35780a8a0585cdcb5f186c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:30:33.575215  121308 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key
	I0819 11:30:33.575253  121308 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key
	I0819 11:30:33.575262  121308 certs.go:256] generating profile certs ...
	I0819 11:30:33.575330  121308 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.key
	I0819 11:30:33.575356  121308 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.bedf2fd4
	I0819 11:30:33.575371  121308 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.bedf2fd4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.183 192.168.39.254]
	I0819 11:30:33.624010  121308 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.bedf2fd4 ...
	I0819 11:30:33.624041  121308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.bedf2fd4: {Name:mke27ab3fb040d48d7c1cc01e78d7e4a453c8d04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:30:33.624230  121308 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.bedf2fd4 ...
	I0819 11:30:33.624247  121308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.bedf2fd4: {Name:mkc1e1747687a6a505ff57a429911599db31ccfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:30:33.624345  121308 certs.go:381] copying /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.bedf2fd4 -> /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt
	I0819 11:30:33.624501  121308 certs.go:385] copying /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.bedf2fd4 -> /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key
	I0819 11:30:33.624625  121308 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key
	I0819 11:30:33.624642  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 11:30:33.624655  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 11:30:33.624668  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 11:30:33.624679  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 11:30:33.624692  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 11:30:33.624705  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 11:30:33.624717  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 11:30:33.624727  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 11:30:33.624778  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem (1338 bytes)
	W0819 11:30:33.624820  121308 certs.go:480] ignoring /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632_empty.pem, impossibly tiny 0 bytes
	I0819 11:30:33.624830  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 11:30:33.624852  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem (1082 bytes)
	I0819 11:30:33.624873  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem (1123 bytes)
	I0819 11:30:33.624896  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem (1679 bytes)
	I0819 11:30:33.624935  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 11:30:33.624963  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> /usr/share/ca-certificates/1066322.pem
	I0819 11:30:33.624976  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:30:33.624989  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem -> /usr/share/ca-certificates/106632.pem
	I0819 11:30:33.625021  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:30:33.627912  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:30:33.628320  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:30:33.628348  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:30:33.628482  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:30:33.628706  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:30:33.628840  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:30:33.628961  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:30:33.704219  121308 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 11:30:33.708823  121308 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 11:30:33.720378  121308 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 11:30:33.724563  121308 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0819 11:30:33.735612  121308 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 11:30:33.739497  121308 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 11:30:33.750157  121308 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 11:30:33.754138  121308 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0819 11:30:33.764679  121308 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 11:30:33.768785  121308 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 11:30:33.779059  121308 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 11:30:33.782880  121308 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0819 11:30:33.792984  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 11:30:33.818030  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 11:30:33.843321  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 11:30:33.866945  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 11:30:33.889780  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 11:30:33.912987  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 11:30:33.936231  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 11:30:33.959185  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 11:30:33.983213  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /usr/share/ca-certificates/1066322.pem (1708 bytes)
	I0819 11:30:34.007518  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 11:30:34.031985  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem --> /usr/share/ca-certificates/106632.pem (1338 bytes)
	I0819 11:30:34.056363  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 11:30:34.072884  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0819 11:30:34.089426  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 11:30:34.105655  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0819 11:30:34.121521  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 11:30:34.137617  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0819 11:30:34.153706  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 11:30:34.170550  121308 ssh_runner.go:195] Run: openssl version
	I0819 11:30:34.176187  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1066322.pem && ln -fs /usr/share/ca-certificates/1066322.pem /etc/ssl/certs/1066322.pem"
	I0819 11:30:34.187041  121308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1066322.pem
	I0819 11:30:34.191426  121308 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 11:26 /usr/share/ca-certificates/1066322.pem
	I0819 11:30:34.191490  121308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1066322.pem
	I0819 11:30:34.197138  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1066322.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 11:30:34.208036  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 11:30:34.218818  121308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:30:34.223221  121308 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:30:34.223308  121308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:30:34.228830  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 11:30:34.239404  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106632.pem && ln -fs /usr/share/ca-certificates/106632.pem /etc/ssl/certs/106632.pem"
	I0819 11:30:34.250261  121308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106632.pem
	I0819 11:30:34.254657  121308 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 11:26 /usr/share/ca-certificates/106632.pem
	I0819 11:30:34.254718  121308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106632.pem
	I0819 11:30:34.260583  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106632.pem /etc/ssl/certs/51391683.0"
	I0819 11:30:34.272616  121308 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 11:30:34.276768  121308 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 11:30:34.276825  121308 kubeadm.go:934] updating node {m02 192.168.39.183 8443 v1.31.0 crio true true} ...
	I0819 11:30:34.276909  121308 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-503856-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 11:30:34.276932  121308 kube-vip.go:115] generating kube-vip config ...
	I0819 11:30:34.276969  121308 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 11:30:34.294422  121308 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 11:30:34.294499  121308 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
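	Note: the manifest above is the kube-vip static pod that is later copied to /etc/kubernetes/manifests/kube-vip.yaml on this node; the kubelet runs it so the control-plane VIP 192.168.39.254 is advertised from whichever control-plane node holds the plndr-cp-lock lease, with control-plane load-balancing on port 8443 enabled. A throwaway reachability probe of that VIP, purely illustrative and not something minikube or this test performs, could look like:
	
	    package main
	
	    import (
	    	"fmt"
	    	"net"
	    	"time"
	    )
	
	    func main() {
	    	// 192.168.39.254:8443 is the APIServerHAVIP and port from the generated manifest above.
	    	conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 5*time.Second)
	    	if err != nil {
	    		fmt.Println("VIP not reachable:", err)
	    		return
	    	}
	    	conn.Close()
	    	fmt.Println("kube-vip is answering on the control-plane VIP")
	    }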
	I0819 11:30:34.294587  121308 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 11:30:34.304718  121308 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 11:30:34.304797  121308 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 11:30:34.314847  121308 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0819 11:30:34.314867  121308 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0819 11:30:34.314886  121308 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0819 11:30:34.314894  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 11:30:34.314971  121308 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 11:30:34.319847  121308 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 11:30:34.319891  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 11:30:35.010437  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 11:30:35.010521  121308 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 11:30:35.015216  121308 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 11:30:35.015258  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 11:30:36.967376  121308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:30:36.982016  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 11:30:36.982125  121308 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 11:30:36.986462  121308 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 11:30:36.986505  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0819 11:30:37.289237  121308 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 11:30:37.298490  121308 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0819 11:30:37.315129  121308 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 11:30:37.332266  121308 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 11:30:37.349240  121308 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 11:30:37.353180  121308 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 11:30:37.365239  121308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:30:37.485693  121308 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:30:37.502177  121308 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:30:37.502548  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:30:37.502587  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:30:37.517697  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38483
	I0819 11:30:37.518244  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:30:37.518757  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:30:37.518779  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:30:37.519088  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:30:37.519270  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:30:37.519420  121308 start.go:317] joinCluster: &{Name:ha-503856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:30:37.519546  121308 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 11:30:37.519569  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:30:37.522799  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:30:37.523291  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:30:37.523322  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:30:37.523498  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:30:37.523666  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:30:37.523837  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:30:37.524029  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:30:37.659431  121308 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 11:30:37.659495  121308 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tahy39.ibnafofoxyrqjcwr --discovery-token-ca-cert-hash sha256:9a531b4470c09d8f50eed8894aa9b341ca16ed7810e6d53dbdc0d1c4cff0af52 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-503856-m02 --control-plane --apiserver-advertise-address=192.168.39.183 --apiserver-bind-port=8443"
	I0819 11:30:58.089041  121308 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tahy39.ibnafofoxyrqjcwr --discovery-token-ca-cert-hash sha256:9a531b4470c09d8f50eed8894aa9b341ca16ed7810e6d53dbdc0d1c4cff0af52 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-503856-m02 --control-plane --apiserver-advertise-address=192.168.39.183 --apiserver-bind-port=8443": (20.429515989s)
	I0819 11:30:58.089098  121308 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 11:30:58.629851  121308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-503856-m02 minikube.k8s.io/updated_at=2024_08_19T11_30_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934 minikube.k8s.io/name=ha-503856 minikube.k8s.io/primary=false
	I0819 11:30:58.742551  121308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-503856-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 11:30:58.868205  121308 start.go:319] duration metric: took 21.348781814s to joinCluster
	I0819 11:30:58.868289  121308 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 11:30:58.868567  121308 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:30:58.869796  121308 out.go:177] * Verifying Kubernetes components...
	I0819 11:30:58.870853  121308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:30:59.107866  121308 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:30:59.149279  121308 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 11:30:59.149724  121308 kapi.go:59] client config for ha-503856: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.crt", KeyFile:"/home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.key", CAFile:"/home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 11:30:59.149883  121308 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I0819 11:30:59.150229  121308 node_ready.go:35] waiting up to 6m0s for node "ha-503856-m02" to be "Ready" ...
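	Note: the repeated GET requests below are how this wait is implemented: minikube polls /api/v1/nodes/ha-503856-m02 through the first control-plane endpoint (the stale VIP host was overridden just above) on a roughly 500ms cadence until the node reports Ready or the 6m budget runs out. A rough client-go equivalent of that loop, offered only as a sketch (the kubeconfig path and node name are taken from this log; minikube itself issues the raw requests shown below rather than running this code):
	
	    package main
	
	    import (
	    	"context"
	    	"fmt"
	    	"time"
	
	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )
	
	    func main() {
	    	// Build a clientset from the same kubeconfig referenced in this log.
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19476-99410/kubeconfig")
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	
	    	deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget as node_ready.go
	    	for time.Now().Before(deadline) {
	    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-503856-m02", metav1.GetOptions{})
	    		if err == nil {
	    			for _, c := range node.Status.Conditions {
	    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
	    					fmt.Println("node ha-503856-m02 is Ready")
	    					return
	    				}
	    			}
	    		}
	    		time.Sleep(500 * time.Millisecond) // matches the polling interval visible in the log
	    	}
	    	fmt.Println("timed out waiting for node to become Ready")
	    }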
	I0819 11:30:59.150352  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:30:59.150365  121308 round_trippers.go:469] Request Headers:
	I0819 11:30:59.150377  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:30:59.150386  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:30:59.162160  121308 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0819 11:30:59.651159  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:30:59.651190  121308 round_trippers.go:469] Request Headers:
	I0819 11:30:59.651202  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:30:59.651207  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:30:59.655026  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:00.150666  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:00.150695  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:00.150707  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:00.150715  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:00.155129  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:31:00.651313  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:00.651336  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:00.651345  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:00.651349  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:00.654727  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:01.150538  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:01.150562  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:01.150576  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:01.150582  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:01.153748  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:01.154496  121308 node_ready.go:53] node "ha-503856-m02" has status "Ready":"False"
	I0819 11:31:01.650881  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:01.650909  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:01.650923  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:01.650929  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:01.654267  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:02.150456  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:02.150483  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:02.150491  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:02.150495  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:02.154067  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:02.651456  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:02.651479  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:02.651489  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:02.651493  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:02.654896  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:03.151338  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:03.151362  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:03.151371  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:03.151377  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:03.156263  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:31:03.156754  121308 node_ready.go:53] node "ha-503856-m02" has status "Ready":"False"
	I0819 11:31:03.651095  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:03.651119  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:03.651127  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:03.651132  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:03.654480  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:04.151447  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:04.151469  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:04.151477  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:04.151480  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:04.154732  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:04.650480  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:04.650504  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:04.650515  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:04.650520  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:04.653989  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:05.151060  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:05.151093  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:05.151102  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:05.151107  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:05.155258  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:31:05.651366  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:05.651390  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:05.651398  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:05.651402  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:05.654773  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:05.655249  121308 node_ready.go:53] node "ha-503856-m02" has status "Ready":"False"
	I0819 11:31:06.150660  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:06.150685  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:06.150696  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:06.150701  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:06.157514  121308 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 11:31:06.650708  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:06.650733  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:06.650741  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:06.650746  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:06.653917  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:07.150807  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:07.150832  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:07.150840  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:07.150846  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:07.154267  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:07.651168  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:07.651193  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:07.651202  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:07.651207  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:07.654361  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:08.150812  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:08.150842  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:08.150855  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:08.150860  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:08.154188  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:08.154699  121308 node_ready.go:53] node "ha-503856-m02" has status "Ready":"False"
	I0819 11:31:08.651213  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:08.651236  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:08.651245  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:08.651248  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:08.658911  121308 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 11:31:09.150755  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:09.150787  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:09.150795  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:09.150799  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:09.274908  121308 round_trippers.go:574] Response Status: 200 OK in 124 milliseconds
	I0819 11:31:09.650556  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:09.650582  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:09.650590  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:09.650596  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:09.653939  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:10.151431  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:10.151459  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:10.151469  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:10.151474  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:10.154606  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:10.155127  121308 node_ready.go:53] node "ha-503856-m02" has status "Ready":"False"
	I0819 11:31:10.650536  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:10.650562  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:10.650571  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:10.650575  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:10.654012  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:11.150877  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:11.150902  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:11.150911  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:11.150915  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:11.154176  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:11.651206  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:11.651229  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:11.651237  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:11.651240  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:11.654428  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:12.150531  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:12.150555  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:12.150563  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:12.150568  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:12.154577  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:12.155213  121308 node_ready.go:53] node "ha-503856-m02" has status "Ready":"False"
	I0819 11:31:12.650918  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:12.650944  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:12.650954  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:12.650959  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:12.654346  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:13.151261  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:13.151283  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:13.151291  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:13.151296  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:13.154547  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:13.650503  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:13.650528  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:13.650540  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:13.650553  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:13.653918  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:14.150682  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:14.150705  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:14.150713  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:14.150717  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:14.154249  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:14.651367  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:14.651392  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:14.651401  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:14.651407  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:14.654690  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:14.655212  121308 node_ready.go:53] node "ha-503856-m02" has status "Ready":"False"
	I0819 11:31:15.150779  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:15.150803  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:15.150812  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:15.150818  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:15.154018  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:15.651095  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:15.651120  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:15.651128  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:15.651132  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:15.654842  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:16.150783  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:16.150813  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.150824  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.150831  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.154407  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:16.154962  121308 node_ready.go:49] node "ha-503856-m02" has status "Ready":"True"
	I0819 11:31:16.154985  121308 node_ready.go:38] duration metric: took 17.004735248s for node "ha-503856-m02" to be "Ready" ...
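The loop above is the node_ready wait: minikube re-fetches /api/v1/nodes/ha-503856-m02 roughly every 500ms and checks the node's Ready condition until it reports True (about 17s here). The sketch below shows the same idea with client-go; it is an illustration only, not the code in node_ready.go, and the kubeconfig path is a placeholder.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition is True or
// the timeout elapses, mirroring the ~500ms poll visible in the log above.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}

func main() {
	// Placeholder kubeconfig path; the test run writes its own under the Jenkins home.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "ha-503856-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}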
	I0819 11:31:16.154996  121308 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 11:31:16.155096  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0819 11:31:16.155107  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.155122  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.155129  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.158937  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:16.165497  121308 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2jdlw" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:16.165598  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-2jdlw
	I0819 11:31:16.165607  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.165615  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.165620  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.168516  121308 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 11:31:16.169237  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:31:16.169254  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.169263  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.169270  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.171806  121308 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 11:31:16.172383  121308 pod_ready.go:93] pod "coredns-6f6b679f8f-2jdlw" in "kube-system" namespace has status "Ready":"True"
	I0819 11:31:16.172403  121308 pod_ready.go:82] duration metric: took 6.87663ms for pod "coredns-6f6b679f8f-2jdlw" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:16.172413  121308 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-5dbrz" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:16.172469  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-5dbrz
	I0819 11:31:16.172477  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.172484  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.172489  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.174739  121308 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 11:31:16.175483  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:31:16.175502  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.175510  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.175518  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.179447  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:16.180253  121308 pod_ready.go:93] pod "coredns-6f6b679f8f-5dbrz" in "kube-system" namespace has status "Ready":"True"
	I0819 11:31:16.180285  121308 pod_ready.go:82] duration metric: took 7.864672ms for pod "coredns-6f6b679f8f-5dbrz" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:16.180308  121308 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:16.180389  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-503856
	I0819 11:31:16.180400  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.180410  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.180419  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.182976  121308 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 11:31:16.183903  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:31:16.183922  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.183933  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.183942  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.186985  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:16.187785  121308 pod_ready.go:93] pod "etcd-ha-503856" in "kube-system" namespace has status "Ready":"True"
	I0819 11:31:16.187805  121308 pod_ready.go:82] duration metric: took 7.485597ms for pod "etcd-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:16.187819  121308 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:16.187888  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-503856-m02
	I0819 11:31:16.187898  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.187910  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.187917  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.192410  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:31:16.193005  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:16.193023  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.193031  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.193034  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.196077  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:16.196798  121308 pod_ready.go:93] pod "etcd-ha-503856-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 11:31:16.196825  121308 pod_ready.go:82] duration metric: took 8.996105ms for pod "etcd-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:16.196847  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:16.351291  121308 request.go:632] Waited for 154.366111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-503856
	I0819 11:31:16.351404  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-503856
	I0819 11:31:16.351416  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.351430  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.351437  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.354928  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:16.550888  121308 request.go:632] Waited for 195.31773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:31:16.550974  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:31:16.550979  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.550987  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.550993  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.554225  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:16.554734  121308 pod_ready.go:93] pod "kube-apiserver-ha-503856" in "kube-system" namespace has status "Ready":"True"
	I0819 11:31:16.554759  121308 pod_ready.go:82] duration metric: took 357.898517ms for pod "kube-apiserver-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:16.554772  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:16.750863  121308 request.go:632] Waited for 195.998191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-503856-m02
	I0819 11:31:16.750924  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-503856-m02
	I0819 11:31:16.750930  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.750944  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.750948  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.754344  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:16.951335  121308 request.go:632] Waited for 196.35381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:16.951399  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:16.951404  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:16.951412  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:16.951416  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:16.954688  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:16.955119  121308 pod_ready.go:93] pod "kube-apiserver-ha-503856-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 11:31:16.955144  121308 pod_ready.go:82] duration metric: took 400.364836ms for pod "kube-apiserver-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
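The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter (by default roughly 5 requests per second with a burst of 10), not from the API server. A minimal sketch of where that knob lives; the package name and the QPS/Burst values are illustrative, not what minikube configures.

package clientutil

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFastClient builds a clientset with a larger client-side rate limit, which
// would shrink the "Waited for ..." delays reported by request.go above.
func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50   // illustrative value; the client-go default is about 5
	cfg.Burst = 100 // illustrative value; the client-go default is about 10
	return kubernetes.NewForConfig(cfg)
}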
	I0819 11:31:16.955154  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:17.151352  121308 request.go:632] Waited for 196.120337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-503856
	I0819 11:31:17.151448  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-503856
	I0819 11:31:17.151455  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:17.151466  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:17.151474  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:17.160599  121308 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0819 11:31:17.351367  121308 request.go:632] Waited for 190.030222ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:31:17.351446  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:31:17.351452  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:17.351460  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:17.351463  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:17.354601  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:17.355202  121308 pod_ready.go:93] pod "kube-controller-manager-ha-503856" in "kube-system" namespace has status "Ready":"True"
	I0819 11:31:17.355226  121308 pod_ready.go:82] duration metric: took 400.064759ms for pod "kube-controller-manager-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:17.355241  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:17.551783  121308 request.go:632] Waited for 196.422792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-503856-m02
	I0819 11:31:17.551843  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-503856-m02
	I0819 11:31:17.551849  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:17.551856  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:17.551860  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:17.555327  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:17.751515  121308 request.go:632] Waited for 195.387334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:17.751591  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:17.751599  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:17.751609  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:17.751615  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:17.755043  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:17.755640  121308 pod_ready.go:93] pod "kube-controller-manager-ha-503856-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 11:31:17.755665  121308 pod_ready.go:82] duration metric: took 400.408914ms for pod "kube-controller-manager-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:17.755678  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-d6zw9" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:17.951776  121308 request.go:632] Waited for 195.987415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d6zw9
	I0819 11:31:17.951841  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d6zw9
	I0819 11:31:17.951846  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:17.951854  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:17.951858  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:17.955056  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:18.151229  121308 request.go:632] Waited for 195.547001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:31:18.151317  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:31:18.151324  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:18.151334  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:18.151341  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:18.154145  121308 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 11:31:18.154647  121308 pod_ready.go:93] pod "kube-proxy-d6zw9" in "kube-system" namespace has status "Ready":"True"
	I0819 11:31:18.154667  121308 pod_ready.go:82] duration metric: took 398.981566ms for pod "kube-proxy-d6zw9" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:18.154677  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j2f6h" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:18.350839  121308 request.go:632] Waited for 196.063612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j2f6h
	I0819 11:31:18.350909  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j2f6h
	I0819 11:31:18.350914  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:18.350922  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:18.350927  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:18.354241  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:18.551204  121308 request.go:632] Waited for 196.370534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:18.551264  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:18.551269  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:18.551278  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:18.551282  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:18.554393  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:18.554898  121308 pod_ready.go:93] pod "kube-proxy-j2f6h" in "kube-system" namespace has status "Ready":"True"
	I0819 11:31:18.554920  121308 pod_ready.go:82] duration metric: took 400.236586ms for pod "kube-proxy-j2f6h" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:18.554934  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:18.751810  121308 request.go:632] Waited for 196.801696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-503856
	I0819 11:31:18.751869  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-503856
	I0819 11:31:18.751874  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:18.751882  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:18.751888  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:18.755305  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:18.951310  121308 request.go:632] Waited for 195.40754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:31:18.951382  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:31:18.951388  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:18.951395  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:18.951401  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:18.954645  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:18.955169  121308 pod_ready.go:93] pod "kube-scheduler-ha-503856" in "kube-system" namespace has status "Ready":"True"
	I0819 11:31:18.955187  121308 pod_ready.go:82] duration metric: took 400.245984ms for pod "kube-scheduler-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:18.955199  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:19.151310  121308 request.go:632] Waited for 196.038831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-503856-m02
	I0819 11:31:19.151387  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-503856-m02
	I0819 11:31:19.151395  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:19.151403  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:19.151406  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:19.154591  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:19.351614  121308 request.go:632] Waited for 196.434555ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:19.351693  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:31:19.351699  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:19.351706  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:19.351709  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:19.354955  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:19.355610  121308 pod_ready.go:93] pod "kube-scheduler-ha-503856-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 11:31:19.355629  121308 pod_ready.go:82] duration metric: took 400.422835ms for pod "kube-scheduler-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:31:19.355640  121308 pod_ready.go:39] duration metric: took 3.200617934s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
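Each pod_ready wait above pairs a GET on the pod with a GET on its node and then inspects the pod's Ready condition. A minimal, hedged sketch of the pod-side test (clientutil and podIsReady are illustrative names, not minikube's pod_ready.go):

package clientutil

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podIsReady reports whether the named pod has condition Ready=True, the same
// test the pod_ready waits above are driving at.
func podIsReady(cs kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}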
	I0819 11:31:19.355656  121308 api_server.go:52] waiting for apiserver process to appear ...
	I0819 11:31:19.355710  121308 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:31:19.369644  121308 api_server.go:72] duration metric: took 20.501314219s to wait for apiserver process to appear ...
	I0819 11:31:19.369681  121308 api_server.go:88] waiting for apiserver healthz status ...
	I0819 11:31:19.369706  121308 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I0819 11:31:19.374147  121308 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I0819 11:31:19.374237  121308 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I0819 11:31:19.374249  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:19.374260  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:19.374266  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:19.375027  121308 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 11:31:19.375130  121308 api_server.go:141] control plane version: v1.31.0
	I0819 11:31:19.375149  121308 api_server.go:131] duration metric: took 5.461132ms to wait for apiserver health ...
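The healthz step is a plain HTTPS GET on /healthz that expects status 200 with the body "ok", followed by a GET on /version to read the control-plane version. A stdlib-only sketch of the health probe, assuming the endpoint permits unauthenticated reads (as kubeadm-style clusters usually do); it skips TLS verification for brevity, which the real client does not.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz hits the apiserver /healthz endpoint and reports whether it
// returned HTTP 200 with body "ok", as seen at 11:31:19 above.
func checkHealthz(base string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify keeps the sketch short; a real client presents the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz: status %d, body %q", resp.StatusCode, body)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.39.102:8443"); err != nil {
		fmt.Println("unhealthy:", err)
		return
	}
	fmt.Println("apiserver healthy")
}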
	I0819 11:31:19.375157  121308 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 11:31:19.551540  121308 request.go:632] Waited for 176.300465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0819 11:31:19.551635  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0819 11:31:19.551643  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:19.551650  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:19.551655  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:19.556172  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:31:19.562111  121308 system_pods.go:59] 17 kube-system pods found
	I0819 11:31:19.562148  121308 system_pods.go:61] "coredns-6f6b679f8f-2jdlw" [ad8206ac-dc67-4fcb-a4ad-a431f3b0b7cd] Running
	I0819 11:31:19.562153  121308 system_pods.go:61] "coredns-6f6b679f8f-5dbrz" [5530828e-1061-434c-ad2f-80847f3ab171] Running
	I0819 11:31:19.562157  121308 system_pods.go:61] "etcd-ha-503856" [b8932b07-bc71-4d14-bc4c-a323aa900891] Running
	I0819 11:31:19.562160  121308 system_pods.go:61] "etcd-ha-503856-m02" [7c495867-e51d-4100-b0d8-2794e45a18c4] Running
	I0819 11:31:19.562163  121308 system_pods.go:61] "kindnet-rnjwj" [1a6e4b0d-f3f2-45e3-b66e-b0457ba61723] Running
	I0819 11:31:19.562166  121308 system_pods.go:61] "kindnet-st2mx" [99e7c93b-40a9-4902-b1a5-5a6bcc55735c] Running
	I0819 11:31:19.562169  121308 system_pods.go:61] "kube-apiserver-ha-503856" [bdea9580-2d12-4e91-acbd-5a5e08f5637c] Running
	I0819 11:31:19.562172  121308 system_pods.go:61] "kube-apiserver-ha-503856-m02" [a1d5950d-50bc-42e8-b432-27425aa4b80d] Running
	I0819 11:31:19.562175  121308 system_pods.go:61] "kube-controller-manager-ha-503856" [36c9c0c5-0b9e-4fce-a34f-bf1c21590af4] Running
	I0819 11:31:19.562179  121308 system_pods.go:61] "kube-controller-manager-ha-503856-m02" [a58cf93b-47a4-4cb7-80e1-afb525b1a2b2] Running
	I0819 11:31:19.562182  121308 system_pods.go:61] "kube-proxy-d6zw9" [f8054009-c06a-4ccc-b6c4-22e0f6bb632a] Running
	I0819 11:31:19.562184  121308 system_pods.go:61] "kube-proxy-j2f6h" [e9623c18-7b96-49b5-8cc6-6ea700eec47e] Running
	I0819 11:31:19.562187  121308 system_pods.go:61] "kube-scheduler-ha-503856" [2c8c7e78-ded0-47ff-8720-b1c36c9123c6] Running
	I0819 11:31:19.562190  121308 system_pods.go:61] "kube-scheduler-ha-503856-m02" [6f51735c-0f3e-49f8-aff7-c6c485e0e653] Running
	I0819 11:31:19.562193  121308 system_pods.go:61] "kube-vip-ha-503856" [a184b6bf-9e5f-40a1-a3f8-5b97ce4cd6b8] Running
	I0819 11:31:19.562197  121308 system_pods.go:61] "kube-vip-ha-503856-m02" [5d66ea23-6878-403f-88df-94bf42ad5800] Running
	I0819 11:31:19.562200  121308 system_pods.go:61] "storage-provisioner" [4c212413-ac90-45fb-92de-bfd9e9115540] Running
	I0819 11:31:19.562206  121308 system_pods.go:74] duration metric: took 187.040317ms to wait for pod list to return data ...
	I0819 11:31:19.562216  121308 default_sa.go:34] waiting for default service account to be created ...
	I0819 11:31:19.751666  121308 request.go:632] Waited for 189.372257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I0819 11:31:19.751738  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I0819 11:31:19.751744  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:19.751752  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:19.751757  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:19.755027  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:31:19.755270  121308 default_sa.go:45] found service account: "default"
	I0819 11:31:19.755290  121308 default_sa.go:55] duration metric: took 193.066823ms for default service account to be created ...
	I0819 11:31:19.755300  121308 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 11:31:19.951772  121308 request.go:632] Waited for 196.382531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0819 11:31:19.951856  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0819 11:31:19.951861  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:19.951872  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:19.951875  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:19.956631  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:31:19.962576  121308 system_pods.go:86] 17 kube-system pods found
	I0819 11:31:19.962609  121308 system_pods.go:89] "coredns-6f6b679f8f-2jdlw" [ad8206ac-dc67-4fcb-a4ad-a431f3b0b7cd] Running
	I0819 11:31:19.962615  121308 system_pods.go:89] "coredns-6f6b679f8f-5dbrz" [5530828e-1061-434c-ad2f-80847f3ab171] Running
	I0819 11:31:19.962619  121308 system_pods.go:89] "etcd-ha-503856" [b8932b07-bc71-4d14-bc4c-a323aa900891] Running
	I0819 11:31:19.962623  121308 system_pods.go:89] "etcd-ha-503856-m02" [7c495867-e51d-4100-b0d8-2794e45a18c4] Running
	I0819 11:31:19.962627  121308 system_pods.go:89] "kindnet-rnjwj" [1a6e4b0d-f3f2-45e3-b66e-b0457ba61723] Running
	I0819 11:31:19.962630  121308 system_pods.go:89] "kindnet-st2mx" [99e7c93b-40a9-4902-b1a5-5a6bcc55735c] Running
	I0819 11:31:19.962634  121308 system_pods.go:89] "kube-apiserver-ha-503856" [bdea9580-2d12-4e91-acbd-5a5e08f5637c] Running
	I0819 11:31:19.962637  121308 system_pods.go:89] "kube-apiserver-ha-503856-m02" [a1d5950d-50bc-42e8-b432-27425aa4b80d] Running
	I0819 11:31:19.962641  121308 system_pods.go:89] "kube-controller-manager-ha-503856" [36c9c0c5-0b9e-4fce-a34f-bf1c21590af4] Running
	I0819 11:31:19.962644  121308 system_pods.go:89] "kube-controller-manager-ha-503856-m02" [a58cf93b-47a4-4cb7-80e1-afb525b1a2b2] Running
	I0819 11:31:19.962647  121308 system_pods.go:89] "kube-proxy-d6zw9" [f8054009-c06a-4ccc-b6c4-22e0f6bb632a] Running
	I0819 11:31:19.962650  121308 system_pods.go:89] "kube-proxy-j2f6h" [e9623c18-7b96-49b5-8cc6-6ea700eec47e] Running
	I0819 11:31:19.962653  121308 system_pods.go:89] "kube-scheduler-ha-503856" [2c8c7e78-ded0-47ff-8720-b1c36c9123c6] Running
	I0819 11:31:19.962655  121308 system_pods.go:89] "kube-scheduler-ha-503856-m02" [6f51735c-0f3e-49f8-aff7-c6c485e0e653] Running
	I0819 11:31:19.962658  121308 system_pods.go:89] "kube-vip-ha-503856" [a184b6bf-9e5f-40a1-a3f8-5b97ce4cd6b8] Running
	I0819 11:31:19.962661  121308 system_pods.go:89] "kube-vip-ha-503856-m02" [5d66ea23-6878-403f-88df-94bf42ad5800] Running
	I0819 11:31:19.962664  121308 system_pods.go:89] "storage-provisioner" [4c212413-ac90-45fb-92de-bfd9e9115540] Running
	I0819 11:31:19.962669  121308 system_pods.go:126] duration metric: took 207.363242ms to wait for k8s-apps to be running ...
	I0819 11:31:19.962677  121308 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 11:31:19.962731  121308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:31:19.976870  121308 system_svc.go:56] duration metric: took 14.172102ms WaitForService to wait for kubelet
	I0819 11:31:19.976907  121308 kubeadm.go:582] duration metric: took 21.1085779s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
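The kubelet check at 11:31:19.962 runs "sudo systemctl is-active --quiet service kubelet" inside the VM over SSH and treats exit code 0 as "running". A local stand-in for that check, with the SSH transport omitted for brevity:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive mirrors the systemctl probe from the log: "is-active --quiet"
// prints nothing and exits 0 when the unit is active. Run locally here;
// minikube executes the same command over SSH inside the guest.
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}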
	I0819 11:31:19.976927  121308 node_conditions.go:102] verifying NodePressure condition ...
	I0819 11:31:20.150814  121308 request.go:632] Waited for 173.793312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I0819 11:31:20.150901  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I0819 11:31:20.150909  121308 round_trippers.go:469] Request Headers:
	I0819 11:31:20.150921  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:31:20.150933  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:31:20.155101  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:31:20.156014  121308 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 11:31:20.156041  121308 node_conditions.go:123] node cpu capacity is 2
	I0819 11:31:20.156052  121308 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 11:31:20.156057  121308 node_conditions.go:123] node cpu capacity is 2
	I0819 11:31:20.156061  121308 node_conditions.go:105] duration metric: took 179.129515ms to run NodePressure ...
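The NodePressure step lists all nodes once and reads each node's capacity, which is where the two figures above come from (ephemeral-storage 17734596Ki and 2 CPUs on both nodes). A short client-go sketch of pulling those fields; the package and function names are illustrative.

package clientutil

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacities lists every node's ephemeral-storage and CPU capacity,
// the two values the NodePressure check reports in the log.
func printNodeCapacities(cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}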
	I0819 11:31:20.156076  121308 start.go:241] waiting for startup goroutines ...
	I0819 11:31:20.156103  121308 start.go:255] writing updated cluster config ...
	I0819 11:31:20.157909  121308 out.go:201] 
	I0819 11:31:20.159418  121308 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:31:20.159527  121308 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/config.json ...
	I0819 11:31:20.161292  121308 out.go:177] * Starting "ha-503856-m03" control-plane node in "ha-503856" cluster
	I0819 11:31:20.162693  121308 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 11:31:20.162731  121308 cache.go:56] Caching tarball of preloaded images
	I0819 11:31:20.162861  121308 preload.go:172] Found /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 11:31:20.162873  121308 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 11:31:20.162976  121308 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/config.json ...
	I0819 11:31:20.163171  121308 start.go:360] acquireMachinesLock for ha-503856-m03: {Name:mk79269fbbd6912302c8df8ddb039d4e2f1a0790 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:31:20.163214  121308 start.go:364] duration metric: took 22.017µs to acquireMachinesLock for "ha-503856-m03"
	I0819 11:31:20.163233  121308 start.go:93] Provisioning new machine with config: &{Name:ha-503856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 11:31:20.163331  121308 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0819 11:31:20.165351  121308 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:31:20.165454  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:31:20.165502  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:31:20.181094  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42713
	I0819 11:31:20.181520  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:31:20.182029  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:31:20.182048  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:31:20.182422  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:31:20.182743  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetMachineName
	I0819 11:31:20.183067  121308 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:31:20.183273  121308 start.go:159] libmachine.API.Create for "ha-503856" (driver="kvm2")
	I0819 11:31:20.183308  121308 client.go:168] LocalClient.Create starting
	I0819 11:31:20.183352  121308 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem
	I0819 11:31:20.183401  121308 main.go:141] libmachine: Decoding PEM data...
	I0819 11:31:20.183423  121308 main.go:141] libmachine: Parsing certificate...
	I0819 11:31:20.183489  121308 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem
	I0819 11:31:20.183533  121308 main.go:141] libmachine: Decoding PEM data...
	I0819 11:31:20.183550  121308 main.go:141] libmachine: Parsing certificate...
	I0819 11:31:20.183577  121308 main.go:141] libmachine: Running pre-create checks...
	I0819 11:31:20.183590  121308 main.go:141] libmachine: (ha-503856-m03) Calling .PreCreateCheck
	I0819 11:31:20.183792  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetConfigRaw
	I0819 11:31:20.184304  121308 main.go:141] libmachine: Creating machine...
	I0819 11:31:20.184324  121308 main.go:141] libmachine: (ha-503856-m03) Calling .Create
	I0819 11:31:20.184512  121308 main.go:141] libmachine: (ha-503856-m03) Creating KVM machine...
	I0819 11:31:20.185960  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found existing default KVM network
	I0819 11:31:20.186120  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found existing private KVM network mk-ha-503856
	I0819 11:31:20.186273  121308 main.go:141] libmachine: (ha-503856-m03) Setting up store path in /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03 ...
	I0819 11:31:20.186298  121308 main.go:141] libmachine: (ha-503856-m03) Building disk image from file:///home/jenkins/minikube-integration/19476-99410/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 11:31:20.186377  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:20.186275  122066 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 11:31:20.186508  121308 main.go:141] libmachine: (ha-503856-m03) Downloading /home/jenkins/minikube-integration/19476-99410/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19476-99410/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 11:31:20.443661  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:20.443500  122066 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/id_rsa...
	I0819 11:31:20.771388  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:20.771264  122066 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/ha-503856-m03.rawdisk...
	I0819 11:31:20.771422  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Writing magic tar header
	I0819 11:31:20.771436  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Writing SSH key tar header
	I0819 11:31:20.771447  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:20.771396  122066 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03 ...
	I0819 11:31:20.771572  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03
	I0819 11:31:20.771599  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube/machines
	I0819 11:31:20.771617  121308 main.go:141] libmachine: (ha-503856-m03) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03 (perms=drwx------)
	I0819 11:31:20.771632  121308 main.go:141] libmachine: (ha-503856-m03) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube/machines (perms=drwxr-xr-x)
	I0819 11:31:20.771646  121308 main.go:141] libmachine: (ha-503856-m03) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube (perms=drwxr-xr-x)
	I0819 11:31:20.771660  121308 main.go:141] libmachine: (ha-503856-m03) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410 (perms=drwxrwxr-x)
	I0819 11:31:20.771670  121308 main.go:141] libmachine: (ha-503856-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 11:31:20.771682  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 11:31:20.771697  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410
	I0819 11:31:20.771706  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 11:31:20.771715  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Checking permissions on dir: /home/jenkins
	I0819 11:31:20.771746  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Checking permissions on dir: /home
	I0819 11:31:20.771764  121308 main.go:141] libmachine: (ha-503856-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 11:31:20.771775  121308 main.go:141] libmachine: (ha-503856-m03) Creating domain...
	I0819 11:31:20.771788  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Skipping /home - not owner
	I0819 11:31:20.772777  121308 main.go:141] libmachine: (ha-503856-m03) define libvirt domain using xml: 
	I0819 11:31:20.772803  121308 main.go:141] libmachine: (ha-503856-m03) <domain type='kvm'>
	I0819 11:31:20.772813  121308 main.go:141] libmachine: (ha-503856-m03)   <name>ha-503856-m03</name>
	I0819 11:31:20.772821  121308 main.go:141] libmachine: (ha-503856-m03)   <memory unit='MiB'>2200</memory>
	I0819 11:31:20.772833  121308 main.go:141] libmachine: (ha-503856-m03)   <vcpu>2</vcpu>
	I0819 11:31:20.772846  121308 main.go:141] libmachine: (ha-503856-m03)   <features>
	I0819 11:31:20.772862  121308 main.go:141] libmachine: (ha-503856-m03)     <acpi/>
	I0819 11:31:20.772875  121308 main.go:141] libmachine: (ha-503856-m03)     <apic/>
	I0819 11:31:20.772915  121308 main.go:141] libmachine: (ha-503856-m03)     <pae/>
	I0819 11:31:20.772943  121308 main.go:141] libmachine: (ha-503856-m03)     
	I0819 11:31:20.772955  121308 main.go:141] libmachine: (ha-503856-m03)   </features>
	I0819 11:31:20.772964  121308 main.go:141] libmachine: (ha-503856-m03)   <cpu mode='host-passthrough'>
	I0819 11:31:20.772974  121308 main.go:141] libmachine: (ha-503856-m03)   
	I0819 11:31:20.772984  121308 main.go:141] libmachine: (ha-503856-m03)   </cpu>
	I0819 11:31:20.772993  121308 main.go:141] libmachine: (ha-503856-m03)   <os>
	I0819 11:31:20.773003  121308 main.go:141] libmachine: (ha-503856-m03)     <type>hvm</type>
	I0819 11:31:20.773011  121308 main.go:141] libmachine: (ha-503856-m03)     <boot dev='cdrom'/>
	I0819 11:31:20.773025  121308 main.go:141] libmachine: (ha-503856-m03)     <boot dev='hd'/>
	I0819 11:31:20.773039  121308 main.go:141] libmachine: (ha-503856-m03)     <bootmenu enable='no'/>
	I0819 11:31:20.773049  121308 main.go:141] libmachine: (ha-503856-m03)   </os>
	I0819 11:31:20.773070  121308 main.go:141] libmachine: (ha-503856-m03)   <devices>
	I0819 11:31:20.773083  121308 main.go:141] libmachine: (ha-503856-m03)     <disk type='file' device='cdrom'>
	I0819 11:31:20.773119  121308 main.go:141] libmachine: (ha-503856-m03)       <source file='/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/boot2docker.iso'/>
	I0819 11:31:20.773145  121308 main.go:141] libmachine: (ha-503856-m03)       <target dev='hdc' bus='scsi'/>
	I0819 11:31:20.773172  121308 main.go:141] libmachine: (ha-503856-m03)       <readonly/>
	I0819 11:31:20.773197  121308 main.go:141] libmachine: (ha-503856-m03)     </disk>
	I0819 11:31:20.773212  121308 main.go:141] libmachine: (ha-503856-m03)     <disk type='file' device='disk'>
	I0819 11:31:20.773222  121308 main.go:141] libmachine: (ha-503856-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 11:31:20.773238  121308 main.go:141] libmachine: (ha-503856-m03)       <source file='/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/ha-503856-m03.rawdisk'/>
	I0819 11:31:20.773248  121308 main.go:141] libmachine: (ha-503856-m03)       <target dev='hda' bus='virtio'/>
	I0819 11:31:20.773254  121308 main.go:141] libmachine: (ha-503856-m03)     </disk>
	I0819 11:31:20.773261  121308 main.go:141] libmachine: (ha-503856-m03)     <interface type='network'>
	I0819 11:31:20.773269  121308 main.go:141] libmachine: (ha-503856-m03)       <source network='mk-ha-503856'/>
	I0819 11:31:20.773285  121308 main.go:141] libmachine: (ha-503856-m03)       <model type='virtio'/>
	I0819 11:31:20.773295  121308 main.go:141] libmachine: (ha-503856-m03)     </interface>
	I0819 11:31:20.773305  121308 main.go:141] libmachine: (ha-503856-m03)     <interface type='network'>
	I0819 11:31:20.773315  121308 main.go:141] libmachine: (ha-503856-m03)       <source network='default'/>
	I0819 11:31:20.773326  121308 main.go:141] libmachine: (ha-503856-m03)       <model type='virtio'/>
	I0819 11:31:20.773334  121308 main.go:141] libmachine: (ha-503856-m03)     </interface>
	I0819 11:31:20.773344  121308 main.go:141] libmachine: (ha-503856-m03)     <serial type='pty'>
	I0819 11:31:20.773353  121308 main.go:141] libmachine: (ha-503856-m03)       <target port='0'/>
	I0819 11:31:20.773364  121308 main.go:141] libmachine: (ha-503856-m03)     </serial>
	I0819 11:31:20.773380  121308 main.go:141] libmachine: (ha-503856-m03)     <console type='pty'>
	I0819 11:31:20.773388  121308 main.go:141] libmachine: (ha-503856-m03)       <target type='serial' port='0'/>
	I0819 11:31:20.773396  121308 main.go:141] libmachine: (ha-503856-m03)     </console>
	I0819 11:31:20.773404  121308 main.go:141] libmachine: (ha-503856-m03)     <rng model='virtio'>
	I0819 11:31:20.773418  121308 main.go:141] libmachine: (ha-503856-m03)       <backend model='random'>/dev/random</backend>
	I0819 11:31:20.773424  121308 main.go:141] libmachine: (ha-503856-m03)     </rng>
	I0819 11:31:20.773431  121308 main.go:141] libmachine: (ha-503856-m03)     
	I0819 11:31:20.773441  121308 main.go:141] libmachine: (ha-503856-m03)     
	I0819 11:31:20.773450  121308 main.go:141] libmachine: (ha-503856-m03)   </devices>
	I0819 11:31:20.773460  121308 main.go:141] libmachine: (ha-503856-m03) </domain>
	I0819 11:31:20.773472  121308 main.go:141] libmachine: (ha-503856-m03) 
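The XML printed above is the libvirt domain definition the kvm2 driver generates for the new node: 2200 MiB of memory, 2 vCPUs, the boot2docker ISO attached as a cdrom, a raw virtio disk, and two virtio NICs on the mk-ha-503856 and default networks. A minimal sketch of that rendering step, assuming a text/template and placeholder paths (the domainConfig type and its fields are illustrative names, not the driver's real types or machine-store paths):

// A minimal sketch (not the kvm2 driver's actual code) of rendering a libvirt domain
// definition like the one logged above from a template.
package main

import (
	"os"
	"text/template"
)

// domainConfig carries only the fields visible in the logged XML; the real driver tracks more.
type domainConfig struct {
	Name     string
	MemMiB   int
	VCPUs    int
	ISOPath  string
	DiskPath string
	Network  string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	cfg := domainConfig{
		Name:     "ha-503856-m03",
		MemMiB:   2200,
		VCPUs:    2,
		ISOPath:  "/path/to/boot2docker.iso",       // placeholder path
		DiskPath: "/path/to/ha-503856-m03.rawdisk", // placeholder path
		Network:  "mk-ha-503856",
	}
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	// The rendered XML is what the libvirt define call receives before the domain is started
	// (the "Creating domain..." step a few lines below).
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}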
	I0819 11:31:20.780669  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:61:14:39 in network default
	I0819 11:31:20.781385  121308 main.go:141] libmachine: (ha-503856-m03) Ensuring networks are active...
	I0819 11:31:20.781407  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:20.782170  121308 main.go:141] libmachine: (ha-503856-m03) Ensuring network default is active
	I0819 11:31:20.782550  121308 main.go:141] libmachine: (ha-503856-m03) Ensuring network mk-ha-503856 is active
	I0819 11:31:20.782945  121308 main.go:141] libmachine: (ha-503856-m03) Getting domain xml...
	I0819 11:31:20.783585  121308 main.go:141] libmachine: (ha-503856-m03) Creating domain...
	I0819 11:31:22.039720  121308 main.go:141] libmachine: (ha-503856-m03) Waiting to get IP...
	I0819 11:31:22.040528  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:22.040945  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:22.040966  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:22.040924  122066 retry.go:31] will retry after 197.841944ms: waiting for machine to come up
	I0819 11:31:22.241064  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:22.241577  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:22.241600  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:22.241539  122066 retry.go:31] will retry after 324.078324ms: waiting for machine to come up
	I0819 11:31:22.566780  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:22.567224  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:22.567256  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:22.567172  122066 retry.go:31] will retry after 402.796459ms: waiting for machine to come up
	I0819 11:31:22.971719  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:22.972183  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:22.972213  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:22.972138  122066 retry.go:31] will retry after 566.878257ms: waiting for machine to come up
	I0819 11:31:23.541156  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:23.541766  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:23.541790  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:23.541688  122066 retry.go:31] will retry after 628.56629ms: waiting for machine to come up
	I0819 11:31:24.171757  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:24.172252  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:24.172277  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:24.172176  122066 retry.go:31] will retry after 885.590988ms: waiting for machine to come up
	I0819 11:31:25.059781  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:25.060341  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:25.060380  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:25.060286  122066 retry.go:31] will retry after 741.397234ms: waiting for machine to come up
	I0819 11:31:25.803145  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:25.803550  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:25.803590  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:25.803518  122066 retry.go:31] will retry after 991.895752ms: waiting for machine to come up
	I0819 11:31:26.796731  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:26.797190  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:26.797212  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:26.797153  122066 retry.go:31] will retry after 1.506964408s: waiting for machine to come up
	I0819 11:31:28.305505  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:28.305948  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:28.305985  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:28.305898  122066 retry.go:31] will retry after 1.478403756s: waiting for machine to come up
	I0819 11:31:29.785666  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:29.786262  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:29.786298  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:29.786206  122066 retry.go:31] will retry after 2.112030077s: waiting for machine to come up
	I0819 11:31:31.900436  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:31.900863  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:31.900891  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:31.900814  122066 retry.go:31] will retry after 3.559996961s: waiting for machine to come up
	I0819 11:31:35.462660  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:35.463208  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:35.463235  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:35.463133  122066 retry.go:31] will retry after 4.366334624s: waiting for machine to come up
	I0819 11:31:39.834601  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:39.835050  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find current IP address of domain ha-503856-m03 in network mk-ha-503856
	I0819 11:31:39.835081  121308 main.go:141] libmachine: (ha-503856-m03) DBG | I0819 11:31:39.835002  122066 retry.go:31] will retry after 3.604040354s: waiting for machine to come up
	I0819 11:31:43.440818  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:43.441291  121308 main.go:141] libmachine: (ha-503856-m03) Found IP for machine: 192.168.39.122
	I0819 11:31:43.441316  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has current primary IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:43.441322  121308 main.go:141] libmachine: (ha-503856-m03) Reserving static IP address...
	I0819 11:31:43.441667  121308 main.go:141] libmachine: (ha-503856-m03) DBG | unable to find host DHCP lease matching {name: "ha-503856-m03", mac: "52:54:00:10:1f:39", ip: "192.168.39.122"} in network mk-ha-503856
	I0819 11:31:43.521399  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Getting to WaitForSSH function...
	I0819 11:31:43.521430  121308 main.go:141] libmachine: (ha-503856-m03) Reserved static IP address: 192.168.39.122
	I0819 11:31:43.521441  121308 main.go:141] libmachine: (ha-503856-m03) Waiting for SSH to be available...
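The "will retry after ..." lines above are the driver polling the libvirt DHCP leases until the new domain picks up an address, with a growing (randomized) delay between attempts. A minimal sketch of that loop, assuming a stubbed lookupIP in place of the real lease query:

// A minimal sketch (assumed helper, not the driver's actual retry code) of the
// "will retry after ..." pattern above: poll for the DHCP-assigned IP with an
// increasing delay until it appears or a deadline passes.
package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP stands in for the driver's DHCP-lease query; here it always fails.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the wait, roughly like the backoff in the log
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	if ip, err := waitForIP(5 * time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}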
	I0819 11:31:43.524277  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:43.524679  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:minikube Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:43.524710  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:43.524833  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Using SSH client type: external
	I0819 11:31:43.524859  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/id_rsa (-rw-------)
	I0819 11:31:43.524915  121308 main.go:141] libmachine: (ha-503856-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.122 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 11:31:43.524934  121308 main.go:141] libmachine: (ha-503856-m03) DBG | About to run SSH command:
	I0819 11:31:43.524948  121308 main.go:141] libmachine: (ha-503856-m03) DBG | exit 0
	I0819 11:31:43.647763  121308 main.go:141] libmachine: (ha-503856-m03) DBG | SSH cmd err, output: <nil>: 
	I0819 11:31:43.648038  121308 main.go:141] libmachine: (ha-503856-m03) KVM machine creation complete!
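The WaitForSSH step above shells out to the system ssh client with host-key checking disabled and runs "exit 0"; a zero exit status is what marks the guest's sshd as reachable, at which point machine creation is reported complete. A small sketch of that probe, with the address and key path as placeholder arguments and only a subset of the logged ssh options:

// A minimal sketch of the external-SSH liveness probe logged above.
package main

import (
	"fmt"
	"os/exec"
)

func sshAlive(addr, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + addr,
		"exit 0",
	}
	// Success (nil error) means the command exited 0, i.e. sshd answered.
	return exec.Command("ssh", args...).Run()
}

func main() {
	// Placeholder values; the real run uses the machine's generated id_rsa.
	if err := sshAlive("192.168.39.122", "/path/to/id_rsa"); err != nil {
		fmt.Println("ssh not ready:", err)
		return
	}
	fmt.Println("SSH is available")
}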
	I0819 11:31:43.648355  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetConfigRaw
	I0819 11:31:43.648912  121308 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:31:43.649105  121308 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:31:43.649255  121308 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 11:31:43.649270  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetState
	I0819 11:31:43.650382  121308 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 11:31:43.650395  121308 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 11:31:43.650401  121308 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 11:31:43.650407  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:31:43.652705  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:43.653134  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:43.653162  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:43.653304  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:31:43.653501  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:43.653653  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:43.653797  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:31:43.654047  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:31:43.654282  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0819 11:31:43.654292  121308 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 11:31:43.754854  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 11:31:43.754882  121308 main.go:141] libmachine: Detecting the provisioner...
	I0819 11:31:43.754917  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:31:43.757738  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:43.758200  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:43.758232  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:43.758445  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:31:43.758674  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:43.758866  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:43.759011  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:31:43.759163  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:31:43.759354  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0819 11:31:43.759368  121308 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 11:31:43.860452  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 11:31:43.860549  121308 main.go:141] libmachine: found compatible host: buildroot
	I0819 11:31:43.860559  121308 main.go:141] libmachine: Provisioning with buildroot...
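Provisioner selection is driven by the "cat /etc/os-release" output captured above: the ID field identifies the guest as Buildroot, which selects the buildroot provisioner. A minimal sketch of that match, assuming a plain line scan rather than libmachine's actual detector:

// A minimal sketch of picking the provisioner from /etc/os-release content.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// osID extracts the ID= field from os-release style text.
func osID(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
		}
	}
	return ""
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
	if osID(sample) == "buildroot" {
		fmt.Println("found compatible host: buildroot")
	}
}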
	I0819 11:31:43.860567  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetMachineName
	I0819 11:31:43.860864  121308 buildroot.go:166] provisioning hostname "ha-503856-m03"
	I0819 11:31:43.860889  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetMachineName
	I0819 11:31:43.861094  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:31:43.863700  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:43.864053  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:43.864088  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:43.864221  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:31:43.864400  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:43.864595  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:43.864699  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:31:43.864833  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:31:43.865008  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0819 11:31:43.865023  121308 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-503856-m03 && echo "ha-503856-m03" | sudo tee /etc/hostname
	I0819 11:31:43.983047  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-503856-m03
	
	I0819 11:31:43.983077  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:31:43.985980  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:43.986316  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:43.986342  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:43.986545  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:31:43.986757  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:43.986901  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:43.987003  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:31:43.987127  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:31:43.987343  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0819 11:31:43.987363  121308 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-503856-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-503856-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-503856-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 11:31:44.096697  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 11:31:44.096762  121308 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19476-99410/.minikube CaCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19476-99410/.minikube}
	I0819 11:31:44.096786  121308 buildroot.go:174] setting up certificates
	I0819 11:31:44.096797  121308 provision.go:84] configureAuth start
	I0819 11:31:44.096811  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetMachineName
	I0819 11:31:44.097152  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetIP
	I0819 11:31:44.099996  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.100366  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:44.100393  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.100542  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:31:44.102766  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.103244  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:44.103271  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.103409  121308 provision.go:143] copyHostCerts
	I0819 11:31:44.103453  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 11:31:44.103492  121308 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem, removing ...
	I0819 11:31:44.103508  121308 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 11:31:44.103572  121308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem (1082 bytes)
	I0819 11:31:44.103643  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 11:31:44.103664  121308 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem, removing ...
	I0819 11:31:44.103671  121308 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 11:31:44.103694  121308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem (1123 bytes)
	I0819 11:31:44.103762  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 11:31:44.103784  121308 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem, removing ...
	I0819 11:31:44.103790  121308 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 11:31:44.103814  121308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem (1679 bytes)
	I0819 11:31:44.103863  121308 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem org=jenkins.ha-503856-m03 san=[127.0.0.1 192.168.39.122 ha-503856-m03 localhost minikube]
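The server cert generated above is signed by the shared minikube CA and carries the node's IP and hostnames as Subject Alternative Names, so the machine's TLS endpoint is valid under any of them. A standard-library sketch of the same idea, using a throwaway CA instead of loading ca.pem/ca-key.pem; key sizes and validity periods here are illustrative:

// A minimal sketch (standard library only; not minikube's cert code) of generating a
// CA-signed server certificate with the SANs listed in the log line above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA for the sketch; the real flow loads the existing CA cert and key.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the log: loopback, the node IP, hostname, localhost, minikube.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-503856-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.122")},
		DNSNames:     []string{"ha-503856-m03", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("server cert: %d DER bytes, SANs: %v %v\n", len(der), srvTmpl.DNSNames, srvTmpl.IPAddresses)
}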
	I0819 11:31:44.342828  121308 provision.go:177] copyRemoteCerts
	I0819 11:31:44.342889  121308 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 11:31:44.342928  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:31:44.345724  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.346012  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:44.346037  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.346251  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:31:44.346456  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:44.346690  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:31:44.346823  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/id_rsa Username:docker}
	I0819 11:31:44.426457  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 11:31:44.426546  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 11:31:44.450836  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 11:31:44.450920  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 11:31:44.475298  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 11:31:44.475386  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 11:31:44.499633  121308 provision.go:87] duration metric: took 402.822967ms to configureAuth
	I0819 11:31:44.499663  121308 buildroot.go:189] setting minikube options for container-runtime
	I0819 11:31:44.499908  121308 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:31:44.499995  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:31:44.502493  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.502894  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:44.502923  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.503087  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:31:44.503288  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:44.503478  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:44.503639  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:31:44.503836  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:31:44.504001  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0819 11:31:44.504015  121308 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 11:31:44.759869  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 11:31:44.759904  121308 main.go:141] libmachine: Checking connection to Docker...
	I0819 11:31:44.759913  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetURL
	I0819 11:31:44.761138  121308 main.go:141] libmachine: (ha-503856-m03) DBG | Using libvirt version 6000000
	I0819 11:31:44.762898  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.763223  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:44.763256  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.763359  121308 main.go:141] libmachine: Docker is up and running!
	I0819 11:31:44.763388  121308 main.go:141] libmachine: Reticulating splines...
	I0819 11:31:44.763401  121308 client.go:171] duration metric: took 24.580080005s to LocalClient.Create
	I0819 11:31:44.763429  121308 start.go:167] duration metric: took 24.580158524s to libmachine.API.Create "ha-503856"
	I0819 11:31:44.763441  121308 start.go:293] postStartSetup for "ha-503856-m03" (driver="kvm2")
	I0819 11:31:44.763459  121308 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 11:31:44.763483  121308 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:31:44.763770  121308 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 11:31:44.763800  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:31:44.765581  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.765834  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:44.765863  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.766016  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:31:44.766214  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:44.766381  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:31:44.766543  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/id_rsa Username:docker}
	I0819 11:31:44.845814  121308 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 11:31:44.850310  121308 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 11:31:44.850345  121308 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/addons for local assets ...
	I0819 11:31:44.850422  121308 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/files for local assets ...
	I0819 11:31:44.850499  121308 filesync.go:149] local asset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> 1066322.pem in /etc/ssl/certs
	I0819 11:31:44.850506  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> /etc/ssl/certs/1066322.pem
	I0819 11:31:44.850587  121308 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 11:31:44.859846  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 11:31:44.883965  121308 start.go:296] duration metric: took 120.503585ms for postStartSetup
	I0819 11:31:44.884033  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetConfigRaw
	I0819 11:31:44.884659  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetIP
	I0819 11:31:44.887017  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.887332  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:44.887356  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.887642  121308 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/config.json ...
	I0819 11:31:44.887891  121308 start.go:128] duration metric: took 24.724548392s to createHost
	I0819 11:31:44.887916  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:31:44.890207  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.890543  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:44.890568  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.890750  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:31:44.890979  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:44.891181  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:44.891345  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:31:44.891520  121308 main.go:141] libmachine: Using SSH client type: native
	I0819 11:31:44.891681  121308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0819 11:31:44.891692  121308 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 11:31:44.992295  121308 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724067104.969501066
	
	I0819 11:31:44.992331  121308 fix.go:216] guest clock: 1724067104.969501066
	I0819 11:31:44.992344  121308 fix.go:229] Guest: 2024-08-19 11:31:44.969501066 +0000 UTC Remote: 2024-08-19 11:31:44.887905044 +0000 UTC m=+139.901267068 (delta=81.596022ms)
	I0819 11:31:44.992374  121308 fix.go:200] guest clock delta is within tolerance: 81.596022ms
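The clock check above parses the guest's "date +%s.%N" output, compares it with a host-side timestamp taken around the same moment, and accepts the machine when the skew is small. A minimal sketch using the two timestamps from the log; the 2-second tolerance is an assumption for illustration, not minikube's configured threshold:

// A minimal sketch of the guest-clock skew check.
package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestTime parses the "seconds.nanoseconds" string produced by `date +%s.%N`.
// Float parsing is approximate at nanosecond precision, which is fine for a skew check.
func guestTime(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	guest, err := guestTime("1724067104.969501066") // guest value from the log
	if err != nil {
		panic(err)
	}
	host := time.Date(2024, time.August, 19, 11, 31, 44, 887905044, time.UTC) // host-side timestamp from the log
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold for the sketch
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}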
	I0819 11:31:44.992383  121308 start.go:83] releasing machines lock for "ha-503856-m03", held for 24.829158862s
	I0819 11:31:44.992415  121308 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:31:44.992730  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetIP
	I0819 11:31:44.995088  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.995478  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:44.995506  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:44.997334  121308 out.go:177] * Found network options:
	I0819 11:31:44.998720  121308 out.go:177]   - NO_PROXY=192.168.39.102,192.168.39.183
	W0819 11:31:44.999907  121308 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 11:31:44.999934  121308 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 11:31:44.999950  121308 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:31:45.000567  121308 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:31:45.000777  121308 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:31:45.000881  121308 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 11:31:45.000921  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	W0819 11:31:45.001182  121308 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 11:31:45.001204  121308 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 11:31:45.001264  121308 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 11:31:45.001284  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:31:45.003845  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:45.004121  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:45.004149  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:45.004171  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:45.004421  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:31:45.004637  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:45.004661  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:45.004669  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:45.004782  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:31:45.004868  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:31:45.004963  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:31:45.005023  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/id_rsa Username:docker}
	I0819 11:31:45.005056  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:31:45.005149  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/id_rsa Username:docker}
	I0819 11:31:45.236046  121308 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 11:31:45.241806  121308 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 11:31:45.241884  121308 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 11:31:45.257689  121308 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 11:31:45.257720  121308 start.go:495] detecting cgroup driver to use...
	I0819 11:31:45.257795  121308 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 11:31:45.273519  121308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 11:31:45.287545  121308 docker.go:217] disabling cri-docker service (if available) ...
	I0819 11:31:45.287609  121308 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 11:31:45.301536  121308 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 11:31:45.316644  121308 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 11:31:45.427352  121308 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 11:31:45.591657  121308 docker.go:233] disabling docker service ...
	I0819 11:31:45.591772  121308 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 11:31:45.607168  121308 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 11:31:45.620964  121308 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 11:31:45.745004  121308 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 11:31:45.882334  121308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 11:31:45.897050  121308 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 11:31:45.916092  121308 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 11:31:45.916152  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:31:45.927078  121308 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 11:31:45.927150  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:31:45.938148  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:31:45.949598  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:31:45.961672  121308 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 11:31:45.973479  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:31:45.984953  121308 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:31:46.004406  121308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
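The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. The pause-image and cgroup edits expressed as an in-memory sketch with Go's regexp package (the sample config content is illustrative):

// A minimal sketch (in-memory only) of the CRI-O config rewrites performed via sed above.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"

	// Replace the whole pause_image line, mirroring: sed 's|^.*pause_image = .*$|...|'
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

	// Replace the cgroup_manager line, mirroring: sed 's|^.*cgroup_manager = .*$|...|'
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf)
}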
	I0819 11:31:46.015695  121308 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 11:31:46.026039  121308 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 11:31:46.026105  121308 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 11:31:46.040369  121308 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
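When the bridge-netfilter sysctl is missing (the status-255 error above), the fallback is to load the br_netfilter module and then enable IPv4 forwarding. A rough local sketch of that fallback; it needs root, and in the test run these commands go through the SSH runner rather than executing locally:

// A minimal sketch of the br_netfilter / ip_forward fallback; errors are reported, not fatal.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge-nf-call-iptables not available, loading br_netfilter:", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe failed (expected without root):", err)
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		fmt.Println("could not enable ip_forward (expected without root):", err)
	}
}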
	I0819 11:31:46.050909  121308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:31:46.170079  121308 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 11:31:46.299697  121308 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 11:31:46.299812  121308 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 11:31:46.304745  121308 start.go:563] Will wait 60s for crictl version
	I0819 11:31:46.304806  121308 ssh_runner.go:195] Run: which crictl
	I0819 11:31:46.308508  121308 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 11:31:46.349022  121308 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 11:31:46.349120  121308 ssh_runner.go:195] Run: crio --version
	I0819 11:31:46.377230  121308 ssh_runner.go:195] Run: crio --version
	I0819 11:31:46.409263  121308 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 11:31:46.410757  121308 out.go:177]   - env NO_PROXY=192.168.39.102
	I0819 11:31:46.412215  121308 out.go:177]   - env NO_PROXY=192.168.39.102,192.168.39.183
	I0819 11:31:46.413489  121308 main.go:141] libmachine: (ha-503856-m03) Calling .GetIP
	I0819 11:31:46.416093  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:46.416513  121308 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:31:46.416546  121308 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:31:46.416763  121308 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 11:31:46.420934  121308 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
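The bash one-liner above rewrites /etc/hosts via a temp file: filter out any existing host.minikube.internal entry, append the gateway mapping, and copy the result back. A minimal in-memory sketch of the same upsert:

// A minimal sketch (strings only, no file I/O) of the /etc/hosts upsert above.
package main

import (
	"fmt"
	"strings"
)

// upsertHostEntry drops any line ending in "<tab>name" and appends "ip<tab>name".
func upsertHostEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	fmt.Print(upsertHostEntry(hosts, "192.168.39.1", "host.minikube.internal"))
}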
	I0819 11:31:46.433578  121308 mustload.go:65] Loading cluster: ha-503856
	I0819 11:31:46.433821  121308 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:31:46.434096  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:31:46.434146  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:31:46.449241  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42613
	I0819 11:31:46.449690  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:31:46.450172  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:31:46.450195  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:31:46.450552  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:31:46.450766  121308 main.go:141] libmachine: (ha-503856) Calling .GetState
	I0819 11:31:46.452298  121308 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:31:46.452612  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:31:46.452652  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:31:46.467366  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34007
	I0819 11:31:46.467911  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:31:46.468346  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:31:46.468368  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:31:46.468695  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:31:46.468887  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:31:46.469063  121308 certs.go:68] Setting up /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856 for IP: 192.168.39.122
	I0819 11:31:46.469075  121308 certs.go:194] generating shared ca certs ...
	I0819 11:31:46.469096  121308 certs.go:226] acquiring lock for ca certs: {Name:mkea0a5571fe2f86ea35780a8a0585cdcb5f186c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:31:46.469240  121308 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key
	I0819 11:31:46.469292  121308 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key
	I0819 11:31:46.469306  121308 certs.go:256] generating profile certs ...
	I0819 11:31:46.469396  121308 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.key
	I0819 11:31:46.469428  121308 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.fb95d417
	I0819 11:31:46.469449  121308 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.fb95d417 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.183 192.168.39.122 192.168.39.254]
	I0819 11:31:46.527356  121308 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.fb95d417 ...
	I0819 11:31:46.527391  121308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.fb95d417: {Name:mk011e7a84b72a1279839beb66c759312559f7e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:31:46.527581  121308 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.fb95d417 ...
	I0819 11:31:46.527600  121308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.fb95d417: {Name:mk8decfc934a051f761e55204e12c6734d163b47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:31:46.527698  121308 certs.go:381] copying /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.fb95d417 -> /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt
	I0819 11:31:46.527878  121308 certs.go:385] copying /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.fb95d417 -> /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key
	I0819 11:31:46.528043  121308 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key
	I0819 11:31:46.528063  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 11:31:46.528083  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 11:31:46.528100  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 11:31:46.528121  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 11:31:46.528140  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 11:31:46.528161  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 11:31:46.528180  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 11:31:46.528199  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 11:31:46.528267  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem (1338 bytes)
	W0819 11:31:46.528307  121308 certs.go:480] ignoring /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632_empty.pem, impossibly tiny 0 bytes
	I0819 11:31:46.528321  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 11:31:46.528366  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem (1082 bytes)
	I0819 11:31:46.528399  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem (1123 bytes)
	I0819 11:31:46.528434  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem (1679 bytes)
	I0819 11:31:46.528490  121308 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 11:31:46.528528  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:31:46.528548  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem -> /usr/share/ca-certificates/106632.pem
	I0819 11:31:46.528564  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> /usr/share/ca-certificates/1066322.pem
	I0819 11:31:46.528608  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:31:46.531806  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:31:46.532332  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:31:46.532358  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:31:46.532560  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:31:46.532781  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:31:46.532939  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:31:46.533073  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:31:46.608111  121308 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 11:31:46.612853  121308 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 11:31:46.625919  121308 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 11:31:46.630396  121308 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0819 11:31:46.640885  121308 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 11:31:46.645365  121308 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 11:31:46.655993  121308 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 11:31:46.660112  121308 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0819 11:31:46.672626  121308 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 11:31:46.677154  121308 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 11:31:46.689265  121308 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 11:31:46.693890  121308 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0819 11:31:46.705709  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 11:31:46.731797  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 11:31:46.755443  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 11:31:46.779069  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 11:31:46.803408  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0819 11:31:46.826941  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 11:31:46.851156  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 11:31:46.875085  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 11:31:46.900780  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 11:31:46.924779  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem --> /usr/share/ca-certificates/106632.pem (1338 bytes)
	I0819 11:31:46.948671  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /usr/share/ca-certificates/1066322.pem (1708 bytes)
	I0819 11:31:46.973787  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 11:31:46.990188  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0819 11:31:47.007593  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 11:31:47.025567  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0819 11:31:47.042073  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 11:31:47.058650  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0819 11:31:47.075435  121308 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 11:31:47.092304  121308 ssh_runner.go:195] Run: openssl version
	I0819 11:31:47.098008  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 11:31:47.108795  121308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:31:47.113417  121308 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:31:47.113487  121308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:31:47.119331  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 11:31:47.130238  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106632.pem && ln -fs /usr/share/ca-certificates/106632.pem /etc/ssl/certs/106632.pem"
	I0819 11:31:47.141134  121308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106632.pem
	I0819 11:31:47.146656  121308 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 11:26 /usr/share/ca-certificates/106632.pem
	I0819 11:31:47.146727  121308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106632.pem
	I0819 11:31:47.153019  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106632.pem /etc/ssl/certs/51391683.0"
	I0819 11:31:47.164063  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1066322.pem && ln -fs /usr/share/ca-certificates/1066322.pem /etc/ssl/certs/1066322.pem"
	I0819 11:31:47.174821  121308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1066322.pem
	I0819 11:31:47.179154  121308 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 11:26 /usr/share/ca-certificates/1066322.pem
	I0819 11:31:47.179226  121308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1066322.pem
	I0819 11:31:47.185015  121308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1066322.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 11:31:47.198127  121308 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 11:31:47.202402  121308 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 11:31:47.202492  121308 kubeadm.go:934] updating node {m03 192.168.39.122 8443 v1.31.0 crio true true} ...
	I0819 11:31:47.202591  121308 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-503856-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.122
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 11:31:47.202616  121308 kube-vip.go:115] generating kube-vip config ...
	I0819 11:31:47.202656  121308 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 11:31:47.219835  121308 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 11:31:47.220006  121308 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
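(Aside: the block above is the kube-vip static-pod manifest minikube generates for control-plane load-balancing on the HA VIP 192.168.39.254. A minimal Go sketch of decoding such a manifest and checking the advertised address, using sigs.k8s.io/yaml and k8s.io/api/core/v1; the local file name is an assumption for the example and this is not part of minikube itself.)

package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Hypothetical local copy of what minikube writes to
	// /etc/kubernetes/manifests/kube-vip.yaml on the node.
	data, err := os.ReadFile("kube-vip.yaml")
	if err != nil {
		panic(err)
	}
	var pod corev1.Pod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		panic(err)
	}
	// Report which VIP the static pod will advertise.
	for _, c := range pod.Spec.Containers {
		for _, env := range c.Env {
			if env.Name == "address" {
				fmt.Printf("kube-vip will advertise %s\n", env.Value)
			}
		}
	}
}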
	I0819 11:31:47.220093  121308 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 11:31:47.229761  121308 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 11:31:47.229855  121308 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 11:31:47.239266  121308 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0819 11:31:47.239298  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 11:31:47.239347  121308 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0819 11:31:47.239361  121308 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 11:31:47.239369  121308 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0819 11:31:47.239377  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 11:31:47.239423  121308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:31:47.239447  121308 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 11:31:47.249350  121308 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 11:31:47.249392  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 11:31:47.262303  121308 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 11:31:47.262362  121308 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 11:31:47.262396  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 11:31:47.262432  121308 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 11:31:47.321220  121308 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 11:31:47.321263  121308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
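(Aside: the "Not caching binary, using https://dl.k8s.io/...?checksum=file:..." lines above describe a download-then-verify step, where the expected digest comes from the published .sha256 file next to each binary. A rough Go sketch of that idea with plain net/http, not minikube's actual downloader:)

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into dest and returns the sha256 hex digest of the bytes written.
func fetch(url, dest string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	out, err := os.Create(dest)
	if err != nil {
		return "", err
	}
	defer out.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm"
	got, err := fetch(base, "kubeadm")
	if err != nil {
		panic(err)
	}
	// The .sha256 file published alongside the binary holds the expected digest.
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	want, _ := io.ReadAll(resp.Body)
	if got != strings.TrimSpace(string(want)) {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
	}
	fmt.Println("kubeadm verified:", got)
}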
	I0819 11:31:48.135290  121308 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 11:31:48.144839  121308 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0819 11:31:48.161866  121308 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 11:31:48.178862  121308 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 11:31:48.196145  121308 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 11:31:48.200207  121308 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 11:31:48.212241  121308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:31:48.331898  121308 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:31:48.356030  121308 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:31:48.356580  121308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:31:48.356641  121308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:31:48.373578  121308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37627
	I0819 11:31:48.374156  121308 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:31:48.374708  121308 main.go:141] libmachine: Using API Version  1
	I0819 11:31:48.374740  121308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:31:48.375075  121308 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:31:48.375267  121308 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:31:48.375400  121308 start.go:317] joinCluster: &{Name:ha-503856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cluster
Name:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:31:48.375556  121308 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 11:31:48.375574  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:31:48.378704  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:31:48.379181  121308 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:31:48.379212  121308 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:31:48.379399  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:31:48.379598  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:31:48.379770  121308 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:31:48.379918  121308 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:31:48.518272  121308 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 11:31:48.518338  121308 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9g1dct.m32llnhjbnztl8nq --discovery-token-ca-cert-hash sha256:9a531b4470c09d8f50eed8894aa9b341ca16ed7810e6d53dbdc0d1c4cff0af52 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-503856-m03 --control-plane --apiserver-advertise-address=192.168.39.122 --apiserver-bind-port=8443"
	I0819 11:32:09.865314  121308 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9g1dct.m32llnhjbnztl8nq --discovery-token-ca-cert-hash sha256:9a531b4470c09d8f50eed8894aa9b341ca16ed7810e6d53dbdc0d1c4cff0af52 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-503856-m03 --control-plane --apiserver-advertise-address=192.168.39.122 --apiserver-bind-port=8443": (21.346937303s)
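(Aside: the join above is driven by "kubeadm token create --print-join-command" on the existing control plane, with control-plane flags appended for the new node. A hedged Go sketch of assembling and running that command locally via os/exec; in the log both steps actually happen over SSH, and the advertise address below is simply this run's m03 IP.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// On the existing control plane: print a join command with a fresh, non-expiring token.
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	// Typically: kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash sha256:...
	args := strings.Fields(strings.TrimSpace(string(out)))
	// On the joining node, add the control-plane flags seen in the log.
	args = append(args,
		"--control-plane",
		"--apiserver-advertise-address=192.168.39.122",
		"--apiserver-bind-port=8443",
		"--ignore-preflight-errors=all",
	)
	fmt.Println("running:", strings.Join(args, " "))
	cmd := exec.Command(args[0], args[1:]...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil { // requires root on the joining node
		panic(err)
	}
}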
	I0819 11:32:09.865368  121308 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 11:32:10.369786  121308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-503856-m03 minikube.k8s.io/updated_at=2024_08_19T11_32_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934 minikube.k8s.io/name=ha-503856 minikube.k8s.io/primary=false
	I0819 11:32:10.496237  121308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-503856-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 11:32:10.601157  121308 start.go:319] duration metric: took 22.225751351s to joinCluster
	I0819 11:32:10.601245  121308 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 11:32:10.601611  121308 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:32:10.603100  121308 out.go:177] * Verifying Kubernetes components...
	I0819 11:32:10.604140  121308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:32:10.877173  121308 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:32:10.909687  121308 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 11:32:10.909986  121308 kapi.go:59] client config for ha-503856: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.crt", KeyFile:"/home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.key", CAFile:"/home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 11:32:10.910062  121308 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I0819 11:32:10.910318  121308 node_ready.go:35] waiting up to 6m0s for node "ha-503856-m03" to be "Ready" ...
	I0819 11:32:10.910415  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:10.910424  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:10.910434  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:10.910445  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:10.914305  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:11.411478  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:11.411506  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:11.411517  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:11.411526  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:11.415876  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:32:11.910593  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:11.910621  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:11.910632  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:11.910639  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:11.914646  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:12.411531  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:12.411558  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:12.411570  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:12.411576  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:12.417289  121308 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 11:32:12.910720  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:12.910748  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:12.910763  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:12.910769  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:12.913724  121308 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 11:32:12.914406  121308 node_ready.go:53] node "ha-503856-m03" has status "Ready":"False"
	I0819 11:32:13.410815  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:13.410843  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:13.410854  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:13.410859  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:13.415382  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:32:13.911132  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:13.911161  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:13.911173  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:13.911181  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:13.914748  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:14.410563  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:14.410589  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:14.410599  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:14.410605  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:14.416656  121308 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 11:32:14.910651  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:14.910682  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:14.910693  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:14.910702  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:14.914226  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:14.914790  121308 node_ready.go:53] node "ha-503856-m03" has status "Ready":"False"
	I0819 11:32:15.411431  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:15.411455  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:15.411464  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:15.411472  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:15.417235  121308 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 11:32:15.911416  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:15.911438  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:15.911447  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:15.911452  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:15.914764  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:16.410694  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:16.410720  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:16.410732  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:16.410745  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:16.416771  121308 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 11:32:16.911232  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:16.911258  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:16.911266  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:16.911271  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:16.914931  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:16.915536  121308 node_ready.go:53] node "ha-503856-m03" has status "Ready":"False"
	I0819 11:32:17.410952  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:17.410974  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:17.410983  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:17.410987  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:17.414414  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:17.910675  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:17.910697  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:17.910706  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:17.910709  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:17.914143  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:18.410886  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:18.410920  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:18.410930  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:18.410936  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:18.415313  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:32:18.911459  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:18.911495  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:18.911505  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:18.911509  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:18.915032  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:18.915772  121308 node_ready.go:53] node "ha-503856-m03" has status "Ready":"False"
	I0819 11:32:19.411110  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:19.411133  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:19.411143  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:19.411148  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:19.414481  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:19.911465  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:19.911490  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:19.911501  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:19.911507  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:19.915808  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:32:20.411079  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:20.411104  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:20.411113  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:20.411117  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:20.415246  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:32:20.911185  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:20.911213  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:20.911224  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:20.911230  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:20.914330  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:21.410966  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:21.410991  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:21.411007  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:21.411012  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:21.414226  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:21.414713  121308 node_ready.go:53] node "ha-503856-m03" has status "Ready":"False"
	I0819 11:32:21.911097  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:21.911140  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:21.911149  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:21.911153  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:21.914344  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:22.411221  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:22.411250  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:22.411259  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:22.411264  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:22.415201  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:22.910568  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:22.910593  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:22.910602  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:22.910606  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:22.913979  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:23.410772  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:23.410796  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:23.410805  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:23.410809  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:23.414175  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:23.414786  121308 node_ready.go:53] node "ha-503856-m03" has status "Ready":"False"
	I0819 11:32:23.911039  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:23.911064  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:23.911076  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:23.911085  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:23.913720  121308 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 11:32:24.410570  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:24.410600  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:24.410611  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:24.410617  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:24.415505  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:32:24.910629  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:24.910662  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:24.910671  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:24.910677  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:24.914338  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:25.410692  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:25.410716  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:25.410725  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:25.410729  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:25.414446  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:25.415093  121308 node_ready.go:53] node "ha-503856-m03" has status "Ready":"False"
	I0819 11:32:25.911409  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:25.911431  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:25.911439  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:25.911443  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:25.914958  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:26.411578  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:26.411602  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:26.411610  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:26.411615  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:26.415244  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:26.911151  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:26.911178  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:26.911188  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:26.911203  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:26.914741  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:27.411029  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:27.411053  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:27.411062  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:27.411068  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:27.414377  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:27.910808  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:27.910834  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:27.910845  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:27.910851  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:27.914609  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:27.915274  121308 node_ready.go:53] node "ha-503856-m03" has status "Ready":"False"
	I0819 11:32:28.410799  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:28.410824  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:28.410832  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:28.410838  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:28.413934  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:28.910953  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:28.910979  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:28.910990  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:28.910996  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:28.914995  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:28.915716  121308 node_ready.go:49] node "ha-503856-m03" has status "Ready":"True"
	I0819 11:32:28.915755  121308 node_ready.go:38] duration metric: took 18.005420591s for node "ha-503856-m03" to be "Ready" ...
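(Aside: the loop above is the node_ready wait polling GET /api/v1/nodes/ha-503856-m03 roughly every 500ms until the Ready condition becomes True. A minimal client-go equivalent for illustration; the kubeconfig path is an assumption, the node name and intervals mirror this run.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; each integration profile writes its own.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms for up to 6 minutes, like the wait in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-503856-m03", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}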
	I0819 11:32:28.915771  121308 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 11:32:28.915849  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0819 11:32:28.915862  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:28.915873  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:28.915883  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:28.924660  121308 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0819 11:32:28.933241  121308 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2jdlw" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:28.933375  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-2jdlw
	I0819 11:32:28.933389  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:28.933400  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:28.933408  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:28.938241  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:32:28.938927  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:32:28.938946  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:28.938954  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:28.938959  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:28.942343  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:28.942928  121308 pod_ready.go:93] pod "coredns-6f6b679f8f-2jdlw" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:28.942947  121308 pod_ready.go:82] duration metric: took 9.662876ms for pod "coredns-6f6b679f8f-2jdlw" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:28.942959  121308 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-5dbrz" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:28.943027  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-5dbrz
	I0819 11:32:28.943036  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:28.943045  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:28.943052  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:28.946195  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:28.947301  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:32:28.947320  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:28.947328  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:28.947331  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:28.950305  121308 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 11:32:28.951142  121308 pod_ready.go:93] pod "coredns-6f6b679f8f-5dbrz" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:28.951162  121308 pod_ready.go:82] duration metric: took 8.195322ms for pod "coredns-6f6b679f8f-5dbrz" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:28.951172  121308 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:28.951246  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-503856
	I0819 11:32:28.951256  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:28.951266  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:28.951273  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:28.953998  121308 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 11:32:28.954637  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:32:28.954653  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:28.954677  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:28.954684  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:28.960807  121308 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 11:32:28.961306  121308 pod_ready.go:93] pod "etcd-ha-503856" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:28.961327  121308 pod_ready.go:82] duration metric: took 10.149483ms for pod "etcd-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:28.961337  121308 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:28.961403  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-503856-m02
	I0819 11:32:28.961409  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:28.961417  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:28.961424  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:28.964967  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:28.965819  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:32:28.965835  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:28.965846  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:28.965850  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:28.968576  121308 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 11:32:28.969109  121308 pod_ready.go:93] pod "etcd-ha-503856-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:28.969129  121308 pod_ready.go:82] duration metric: took 7.781053ms for pod "etcd-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:28.969139  121308 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-503856-m03" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:29.111548  121308 request.go:632] Waited for 142.335527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-503856-m03
	I0819 11:32:29.111636  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-503856-m03
	I0819 11:32:29.111661  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:29.111676  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:29.111684  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:29.115707  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:32:29.311960  121308 request.go:632] Waited for 195.380175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:29.312028  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:29.312036  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:29.312047  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:29.312057  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:29.315622  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:29.316104  121308 pod_ready.go:93] pod "etcd-ha-503856-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:29.316128  121308 pod_ready.go:82] duration metric: took 346.980355ms for pod "etcd-ha-503856-m03" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:29.316146  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:29.511229  121308 request.go:632] Waited for 195.001883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-503856
	I0819 11:32:29.511293  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-503856
	I0819 11:32:29.511300  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:29.511307  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:29.511317  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:29.514586  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:29.711790  121308 request.go:632] Waited for 196.451519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:32:29.711891  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:32:29.711900  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:29.711908  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:29.711912  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:29.716113  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:32:29.716932  121308 pod_ready.go:93] pod "kube-apiserver-ha-503856" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:29.716951  121308 pod_ready.go:82] duration metric: took 400.798611ms for pod "kube-apiserver-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:29.716961  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:29.911080  121308 request.go:632] Waited for 194.03651ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-503856-m02
	I0819 11:32:29.911189  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-503856-m02
	I0819 11:32:29.911211  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:29.911219  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:29.911224  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:29.914605  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:30.111766  121308 request.go:632] Waited for 196.114055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:32:30.111831  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:32:30.111837  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:30.111845  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:30.111850  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:30.115295  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:30.115935  121308 pod_ready.go:93] pod "kube-apiserver-ha-503856-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:30.115955  121308 pod_ready.go:82] duration metric: took 398.985634ms for pod "kube-apiserver-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:30.115965  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-503856-m03" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:30.311084  121308 request.go:632] Waited for 195.040261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-503856-m03
	I0819 11:32:30.311168  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-503856-m03
	I0819 11:32:30.311174  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:30.311181  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:30.311186  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:30.314832  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:30.511950  121308 request.go:632] Waited for 196.362241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:30.512008  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:30.512013  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:30.512021  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:30.512025  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:30.515260  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:30.515826  121308 pod_ready.go:93] pod "kube-apiserver-ha-503856-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:30.515848  121308 pod_ready.go:82] duration metric: took 399.875288ms for pod "kube-apiserver-ha-503856-m03" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:30.515862  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:30.712012  121308 request.go:632] Waited for 196.07124ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-503856
	I0819 11:32:30.712121  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-503856
	I0819 11:32:30.712132  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:30.712145  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:30.712155  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:30.715522  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:30.911606  121308 request.go:632] Waited for 195.377819ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:32:30.911698  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:32:30.911704  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:30.911711  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:30.911720  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:30.915054  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:30.915831  121308 pod_ready.go:93] pod "kube-controller-manager-ha-503856" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:30.915853  121308 pod_ready.go:82] duration metric: took 399.983431ms for pod "kube-controller-manager-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:30.915864  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:31.111974  121308 request.go:632] Waited for 196.007678ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-503856-m02
	I0819 11:32:31.112032  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-503856-m02
	I0819 11:32:31.112038  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:31.112046  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:31.112051  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:31.115564  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:31.311809  121308 request.go:632] Waited for 195.413562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:32:31.311879  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:32:31.311886  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:31.311898  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:31.311906  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:31.315398  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:31.316219  121308 pod_ready.go:93] pod "kube-controller-manager-ha-503856-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:31.316240  121308 pod_ready.go:82] duration metric: took 400.370818ms for pod "kube-controller-manager-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:31.316250  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-503856-m03" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:31.511393  121308 request.go:632] Waited for 195.036798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-503856-m03
	I0819 11:32:31.511463  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-503856-m03
	I0819 11:32:31.511471  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:31.511484  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:31.511490  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:31.515100  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:31.711211  121308 request.go:632] Waited for 195.29388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:31.711301  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:31.711312  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:31.711324  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:31.711332  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:31.714829  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:31.715366  121308 pod_ready.go:93] pod "kube-controller-manager-ha-503856-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:31.715390  121308 pod_ready.go:82] duration metric: took 399.13227ms for pod "kube-controller-manager-ha-503856-m03" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:31.715403  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8xzr9" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:31.911422  121308 request.go:632] Waited for 195.934341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8xzr9
	I0819 11:32:31.911481  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8xzr9
	I0819 11:32:31.911488  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:31.911496  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:31.911501  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:31.914817  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:32.111836  121308 request.go:632] Waited for 196.351993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:32.111924  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:32.111933  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:32.111946  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:32.111954  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:32.115286  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:32.115869  121308 pod_ready.go:93] pod "kube-proxy-8xzr9" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:32.115888  121308 pod_ready.go:82] duration metric: took 400.478685ms for pod "kube-proxy-8xzr9" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:32.115901  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-d6zw9" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:32.310981  121308 request.go:632] Waited for 194.990168ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d6zw9
	I0819 11:32:32.311053  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d6zw9
	I0819 11:32:32.311060  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:32.311068  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:32.311075  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:32.314660  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:32.511671  121308 request.go:632] Waited for 196.349477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:32:32.511741  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:32:32.511749  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:32.511760  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:32.511766  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:32.515260  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:32.515890  121308 pod_ready.go:93] pod "kube-proxy-d6zw9" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:32.515912  121308 pod_ready.go:82] duration metric: took 400.003811ms for pod "kube-proxy-d6zw9" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:32.515922  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j2f6h" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:32.711443  121308 request.go:632] Waited for 195.447544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j2f6h
	I0819 11:32:32.711526  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j2f6h
	I0819 11:32:32.711533  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:32.711553  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:32.711577  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:32.715028  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:32.911309  121308 request.go:632] Waited for 195.361052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:32:32.911402  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:32:32.911416  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:32.911429  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:32.911438  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:32.914872  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:32.915563  121308 pod_ready.go:93] pod "kube-proxy-j2f6h" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:32.915584  121308 pod_ready.go:82] duration metric: took 399.655981ms for pod "kube-proxy-j2f6h" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:32.915598  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:33.111779  121308 request.go:632] Waited for 196.080229ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-503856
	I0819 11:32:33.111840  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-503856
	I0819 11:32:33.111845  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:33.111852  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:33.111856  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:33.115006  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:33.312046  121308 request.go:632] Waited for 196.455562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:32:33.312120  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856
	I0819 11:32:33.312128  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:33.312139  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:33.312149  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:33.315807  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:33.316327  121308 pod_ready.go:93] pod "kube-scheduler-ha-503856" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:33.316347  121308 pod_ready.go:82] duration metric: took 400.741583ms for pod "kube-scheduler-ha-503856" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:33.316358  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:33.511935  121308 request.go:632] Waited for 195.48573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-503856-m02
	I0819 11:32:33.512010  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-503856-m02
	I0819 11:32:33.512019  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:33.512027  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:33.512033  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:33.515400  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:33.712035  121308 request.go:632] Waited for 195.865929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:32:33.712099  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m02
	I0819 11:32:33.712111  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:33.712122  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:33.712130  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:33.715572  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:33.716554  121308 pod_ready.go:93] pod "kube-scheduler-ha-503856-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:33.716573  121308 pod_ready.go:82] duration metric: took 400.209171ms for pod "kube-scheduler-ha-503856-m02" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:33.716583  121308 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-503856-m03" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:33.911698  121308 request.go:632] Waited for 195.027976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-503856-m03
	I0819 11:32:33.911791  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-503856-m03
	I0819 11:32:33.911800  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:33.911811  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:33.911821  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:33.915154  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:34.111126  121308 request.go:632] Waited for 195.328636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:34.111226  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-503856-m03
	I0819 11:32:34.111234  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:34.111243  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:34.111251  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:34.115781  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:32:34.116320  121308 pod_ready.go:93] pod "kube-scheduler-ha-503856-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:34.116341  121308 pod_ready.go:82] duration metric: took 399.750695ms for pod "kube-scheduler-ha-503856-m03" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:34.116353  121308 pod_ready.go:39] duration metric: took 5.200563994s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 11:32:34.116367  121308 api_server.go:52] waiting for apiserver process to appear ...
	I0819 11:32:34.116436  121308 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:32:34.131085  121308 api_server.go:72] duration metric: took 23.529785868s to wait for apiserver process to appear ...
	I0819 11:32:34.131122  121308 api_server.go:88] waiting for apiserver healthz status ...
	I0819 11:32:34.131146  121308 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I0819 11:32:34.138628  121308 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I0819 11:32:34.138734  121308 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I0819 11:32:34.138748  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:34.138759  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:34.138767  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:34.139756  121308 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 11:32:34.139832  121308 api_server.go:141] control plane version: v1.31.0
	I0819 11:32:34.139848  121308 api_server.go:131] duration metric: took 8.718688ms to wait for apiserver health ...
	I0819 11:32:34.139859  121308 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 11:32:34.311012  121308 request.go:632] Waited for 171.070779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0819 11:32:34.311097  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0819 11:32:34.311105  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:34.311115  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:34.311124  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:34.318533  121308 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 11:32:34.326586  121308 system_pods.go:59] 24 kube-system pods found
	I0819 11:32:34.326624  121308 system_pods.go:61] "coredns-6f6b679f8f-2jdlw" [ad8206ac-dc67-4fcb-a4ad-a431f3b0b7cd] Running
	I0819 11:32:34.326631  121308 system_pods.go:61] "coredns-6f6b679f8f-5dbrz" [5530828e-1061-434c-ad2f-80847f3ab171] Running
	I0819 11:32:34.326637  121308 system_pods.go:61] "etcd-ha-503856" [b8932b07-bc71-4d14-bc4c-a323aa900891] Running
	I0819 11:32:34.326642  121308 system_pods.go:61] "etcd-ha-503856-m02" [7c495867-e51d-4100-b0d8-2794e45a18c4] Running
	I0819 11:32:34.326647  121308 system_pods.go:61] "etcd-ha-503856-m03" [8a5f4851-a71f-4491-916b-f5b75929b327] Running
	I0819 11:32:34.326651  121308 system_pods.go:61] "kindnet-hvszk" [5484350e-fd9c-4901-984b-05f77e1d20ba] Running
	I0819 11:32:34.326655  121308 system_pods.go:61] "kindnet-rnjwj" [1a6e4b0d-f3f2-45e3-b66e-b0457ba61723] Running
	I0819 11:32:34.326660  121308 system_pods.go:61] "kindnet-st2mx" [99e7c93b-40a9-4902-b1a5-5a6bcc55735c] Running
	I0819 11:32:34.326664  121308 system_pods.go:61] "kube-apiserver-ha-503856" [bdea9580-2d12-4e91-acbd-5a5e08f5637c] Running
	I0819 11:32:34.326669  121308 system_pods.go:61] "kube-apiserver-ha-503856-m02" [a1d5950d-50bc-42e8-b432-27425aa4b80d] Running
	I0819 11:32:34.326674  121308 system_pods.go:61] "kube-apiserver-ha-503856-m03" [92a576da-58d9-42cf-90ed-c82f208e060f] Running
	I0819 11:32:34.326687  121308 system_pods.go:61] "kube-controller-manager-ha-503856" [36c9c0c5-0b9e-4fce-a34f-bf1c21590af4] Running
	I0819 11:32:34.326694  121308 system_pods.go:61] "kube-controller-manager-ha-503856-m02" [a58cf93b-47a4-4cb7-80e1-afb525b1a2b2] Running
	I0819 11:32:34.326699  121308 system_pods.go:61] "kube-controller-manager-ha-503856-m03" [f0ee565d-81f7-4f17-9e58-8d79f5defda6] Running
	I0819 11:32:34.326705  121308 system_pods.go:61] "kube-proxy-8xzr9" [436c9779-87db-44f7-9650-7e4b5431fbed] Running
	I0819 11:32:34.326711  121308 system_pods.go:61] "kube-proxy-d6zw9" [f8054009-c06a-4ccc-b6c4-22e0f6bb632a] Running
	I0819 11:32:34.326720  121308 system_pods.go:61] "kube-proxy-j2f6h" [e9623c18-7b96-49b5-8cc6-6ea700eec47e] Running
	I0819 11:32:34.326726  121308 system_pods.go:61] "kube-scheduler-ha-503856" [2c8c7e78-ded0-47ff-8720-b1c36c9123c6] Running
	I0819 11:32:34.326732  121308 system_pods.go:61] "kube-scheduler-ha-503856-m02" [6f51735c-0f3e-49f8-aff7-c6c485e0e653] Running
	I0819 11:32:34.326738  121308 system_pods.go:61] "kube-scheduler-ha-503856-m03" [afad7788-d0c7-4959-91b5-209ced760d93] Running
	I0819 11:32:34.326743  121308 system_pods.go:61] "kube-vip-ha-503856" [a184b6bf-9e5f-40a1-a3f8-5b97ce4cd6b8] Running
	I0819 11:32:34.326749  121308 system_pods.go:61] "kube-vip-ha-503856-m02" [5d66ea23-6878-403f-88df-94bf42ad5800] Running
	I0819 11:32:34.326754  121308 system_pods.go:61] "kube-vip-ha-503856-m03" [4d116083-4440-468e-ad2d-1364e601db1e] Running
	I0819 11:32:34.326767  121308 system_pods.go:61] "storage-provisioner" [4c212413-ac90-45fb-92de-bfd9e9115540] Running
	I0819 11:32:34.326779  121308 system_pods.go:74] duration metric: took 186.910185ms to wait for pod list to return data ...
	I0819 11:32:34.326790  121308 default_sa.go:34] waiting for default service account to be created ...
	I0819 11:32:34.511195  121308 request.go:632] Waited for 184.309012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I0819 11:32:34.511255  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I0819 11:32:34.511261  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:34.511271  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:34.511278  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:34.514975  121308 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 11:32:34.515093  121308 default_sa.go:45] found service account: "default"
	I0819 11:32:34.515108  121308 default_sa.go:55] duration metric: took 188.308694ms for default service account to be created ...
	I0819 11:32:34.515117  121308 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 11:32:34.711477  121308 request.go:632] Waited for 196.27503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0819 11:32:34.711569  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0819 11:32:34.711582  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:34.711590  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:34.711596  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:34.717916  121308 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 11:32:34.728200  121308 system_pods.go:86] 24 kube-system pods found
	I0819 11:32:34.728231  121308 system_pods.go:89] "coredns-6f6b679f8f-2jdlw" [ad8206ac-dc67-4fcb-a4ad-a431f3b0b7cd] Running
	I0819 11:32:34.728237  121308 system_pods.go:89] "coredns-6f6b679f8f-5dbrz" [5530828e-1061-434c-ad2f-80847f3ab171] Running
	I0819 11:32:34.728242  121308 system_pods.go:89] "etcd-ha-503856" [b8932b07-bc71-4d14-bc4c-a323aa900891] Running
	I0819 11:32:34.728246  121308 system_pods.go:89] "etcd-ha-503856-m02" [7c495867-e51d-4100-b0d8-2794e45a18c4] Running
	I0819 11:32:34.728249  121308 system_pods.go:89] "etcd-ha-503856-m03" [8a5f4851-a71f-4491-916b-f5b75929b327] Running
	I0819 11:32:34.728253  121308 system_pods.go:89] "kindnet-hvszk" [5484350e-fd9c-4901-984b-05f77e1d20ba] Running
	I0819 11:32:34.728256  121308 system_pods.go:89] "kindnet-rnjwj" [1a6e4b0d-f3f2-45e3-b66e-b0457ba61723] Running
	I0819 11:32:34.728259  121308 system_pods.go:89] "kindnet-st2mx" [99e7c93b-40a9-4902-b1a5-5a6bcc55735c] Running
	I0819 11:32:34.728262  121308 system_pods.go:89] "kube-apiserver-ha-503856" [bdea9580-2d12-4e91-acbd-5a5e08f5637c] Running
	I0819 11:32:34.728266  121308 system_pods.go:89] "kube-apiserver-ha-503856-m02" [a1d5950d-50bc-42e8-b432-27425aa4b80d] Running
	I0819 11:32:34.728269  121308 system_pods.go:89] "kube-apiserver-ha-503856-m03" [92a576da-58d9-42cf-90ed-c82f208e060f] Running
	I0819 11:32:34.728273  121308 system_pods.go:89] "kube-controller-manager-ha-503856" [36c9c0c5-0b9e-4fce-a34f-bf1c21590af4] Running
	I0819 11:32:34.728276  121308 system_pods.go:89] "kube-controller-manager-ha-503856-m02" [a58cf93b-47a4-4cb7-80e1-afb525b1a2b2] Running
	I0819 11:32:34.728280  121308 system_pods.go:89] "kube-controller-manager-ha-503856-m03" [f0ee565d-81f7-4f17-9e58-8d79f5defda6] Running
	I0819 11:32:34.728282  121308 system_pods.go:89] "kube-proxy-8xzr9" [436c9779-87db-44f7-9650-7e4b5431fbed] Running
	I0819 11:32:34.728285  121308 system_pods.go:89] "kube-proxy-d6zw9" [f8054009-c06a-4ccc-b6c4-22e0f6bb632a] Running
	I0819 11:32:34.728289  121308 system_pods.go:89] "kube-proxy-j2f6h" [e9623c18-7b96-49b5-8cc6-6ea700eec47e] Running
	I0819 11:32:34.728292  121308 system_pods.go:89] "kube-scheduler-ha-503856" [2c8c7e78-ded0-47ff-8720-b1c36c9123c6] Running
	I0819 11:32:34.728295  121308 system_pods.go:89] "kube-scheduler-ha-503856-m02" [6f51735c-0f3e-49f8-aff7-c6c485e0e653] Running
	I0819 11:32:34.728298  121308 system_pods.go:89] "kube-scheduler-ha-503856-m03" [afad7788-d0c7-4959-91b5-209ced760d93] Running
	I0819 11:32:34.728302  121308 system_pods.go:89] "kube-vip-ha-503856" [a184b6bf-9e5f-40a1-a3f8-5b97ce4cd6b8] Running
	I0819 11:32:34.728304  121308 system_pods.go:89] "kube-vip-ha-503856-m02" [5d66ea23-6878-403f-88df-94bf42ad5800] Running
	I0819 11:32:34.728307  121308 system_pods.go:89] "kube-vip-ha-503856-m03" [4d116083-4440-468e-ad2d-1364e601db1e] Running
	I0819 11:32:34.728310  121308 system_pods.go:89] "storage-provisioner" [4c212413-ac90-45fb-92de-bfd9e9115540] Running
	I0819 11:32:34.728317  121308 system_pods.go:126] duration metric: took 213.192293ms to wait for k8s-apps to be running ...
	I0819 11:32:34.728325  121308 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 11:32:34.728370  121308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:32:34.743448  121308 system_svc.go:56] duration metric: took 15.111773ms WaitForService to wait for kubelet
	I0819 11:32:34.743483  121308 kubeadm.go:582] duration metric: took 24.142193278s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:32:34.743504  121308 node_conditions.go:102] verifying NodePressure condition ...
	I0819 11:32:34.911911  121308 request.go:632] Waited for 168.309732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I0819 11:32:34.911988  121308 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I0819 11:32:34.911994  121308 round_trippers.go:469] Request Headers:
	I0819 11:32:34.912002  121308 round_trippers.go:473]     Accept: application/json, */*
	I0819 11:32:34.912008  121308 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 11:32:34.916346  121308 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 11:32:34.917682  121308 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 11:32:34.917705  121308 node_conditions.go:123] node cpu capacity is 2
	I0819 11:32:34.917716  121308 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 11:32:34.917719  121308 node_conditions.go:123] node cpu capacity is 2
	I0819 11:32:34.917723  121308 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 11:32:34.917726  121308 node_conditions.go:123] node cpu capacity is 2
	I0819 11:32:34.917730  121308 node_conditions.go:105] duration metric: took 174.221965ms to run NodePressure ...
	I0819 11:32:34.917748  121308 start.go:241] waiting for startup goroutines ...
	I0819 11:32:34.917768  121308 start.go:255] writing updated cluster config ...
	I0819 11:32:34.918055  121308 ssh_runner.go:195] Run: rm -f paused
	I0819 11:32:34.969413  121308 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 11:32:34.972111  121308 out.go:177] * Done! kubectl is now configured to use "ha-503856" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 19 11:37:15 ha-503856 crio[678]: time="2024-08-19 11:37:15.063670613Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067435063648654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df048f8f-dfbe-406a-ade0-1e5ee88b40e2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:37:15 ha-503856 crio[678]: time="2024-08-19 11:37:15.064155056Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c62db34-86d4-484f-acfb-651883184d01 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:37:15 ha-503856 crio[678]: time="2024-08-19 11:37:15.064215423Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c62db34-86d4-484f-acfb-651883184d01 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:37:15 ha-503856 crio[678]: time="2024-08-19 11:37:15.064450254Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:56a5ad9cc18e747407a6ebf193b0141d5d39e2506db7ff36e9f281ca31d07175,PodSandboxId:1191cb555eb550a2e7a62a8d5ad002f11d0ec47cb7997421e63fcfcdd619bb42,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724067158500890442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7wpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2dd3c9-6ef2-4aaf-a44f-04fcd5b5ee2a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c7867b6691ac22ff04851850c5c61a8f266622c1c39592596364c7fc6e99c39,PodSandboxId:7074b09831f6bd3b03135218f0698131342183be25852fa7f92d1bd429ec790a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724067024291050172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c212413-ac90-45fb-92de-bfd9e9115540,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67513ebd15d07c15fbfc61b2c17800f56b9db39a3b81085018122fe96a1f9de,PodSandboxId:13c07aa9a002518609bd210554f3a19471e33ebfd9b816b1390235a4c81233e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067024223121643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5dbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530828e-1061-434c-ad2f-80847f3ab171,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8315e448000809a9b1ec9c80658a16ee2c0abdfb6a00d9cf88e506833fc2e464,PodSandboxId:0b0b0a070f3ec813823103f1ea8a06af695e4510a42cd50f86da517d8da95ef1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067024221336819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2jdlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad8206ac-dc
67-4fcb-a4ad-a431f3b0b7cd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1964134e9de80be5a82ee203eebe9f5718ba2a8b84e55479e47e81a74a259a50,PodSandboxId:9079c84056e4b237e79326194bb75a2f176c9f8bf9556b3168d050890162ac6c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724067012619972475,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-st2mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e7c93b-40a9-4902-b1a5-5a6bcc55735c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68730d308f1456cb2e025409c4d2edab9b17464c74f9beabfafc13246e8cc2b3,PodSandboxId:adace0914115cd0a47335d629a4cfbb92f7251bc72bb0a436ae6daeab0d4192d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172406700
8316850072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8054009-c06a-4ccc-b6c4-22e0f6bb632a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11a47171a5438e39273ec80f4da3583d5a56af5d5de55c977d904283dc19b112,PodSandboxId:2c2e375766b14429ccbca66cbd90a4de54eadb91037b4cff34cc1cb046a93549,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172406699967
8401575,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243cc46027459bd9ae669bd4959ae8b2,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccea80d1a22a4536a82c166633220aa67d32633919751a2cf1da7f06c91bc674,PodSandboxId:982016c43ab0e7cfe37c1296b1e9abb139d5479ac81c6f267b9f0a41da9ab8a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724066997299379623,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9aebe0e986b3c246dae4d64df3eee15,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3879d2de39f1c9189c2e3eb16e843ddcbe448dd4be3f2b5ba5617abddfeb007a,PodSandboxId:eb7c9eb1ba04206c376798a19b430446dfef69f4a421e9d862de5c7e5f687408,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724066997296906648,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d885daa5ebbe527098639eca601027a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a1ce45d7b784d0b5b53838e68b5660b6da14d8a1966f23cc6949e4c31ea98e,PodSandboxId:6a5a214f4ecfbe6589eac54fcf3c31672cfe0befef185327158583c48ed17b3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724066997259670096,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 147340613ea835c906d66a92a67bf8cf,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df01b4ed6011abdeb64cada43bcfa1f28ba45ef7e17f2991c3ec8a035d214f2e,PodSandboxId:874ce4bf24c62441806a52c87405c4fc17310af6a25939f4fa49941f7e634a7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724066997211052852,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22b93ecec4cfd4767d930360e5939ae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c62db34-86d4-484f-acfb-651883184d01 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:37:15 ha-503856 crio[678]: time="2024-08-19 11:37:15.100538710Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4338fad4-55ac-4fb1-a250-1cff115dc69b name=/runtime.v1.RuntimeService/Version
	Aug 19 11:37:15 ha-503856 crio[678]: time="2024-08-19 11:37:15.100612487Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4338fad4-55ac-4fb1-a250-1cff115dc69b name=/runtime.v1.RuntimeService/Version
	Aug 19 11:37:15 ha-503856 crio[678]: time="2024-08-19 11:37:15.101605255Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=166a8530-baf1-4bff-832a-2040388d8867 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:37:15 ha-503856 crio[678]: time="2024-08-19 11:37:15.102039341Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067435102015851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=166a8530-baf1-4bff-832a-2040388d8867 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:37:15 ha-503856 crio[678]: time="2024-08-19 11:37:15.102622010Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=518ebf9e-110b-4ce0-b533-ec5184c708dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:37:15 ha-503856 crio[678]: time="2024-08-19 11:37:15.102686349Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=518ebf9e-110b-4ce0-b533-ec5184c708dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:37:15 ha-503856 crio[678]: time="2024-08-19 11:37:15.102919162Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:56a5ad9cc18e747407a6ebf193b0141d5d39e2506db7ff36e9f281ca31d07175,PodSandboxId:1191cb555eb550a2e7a62a8d5ad002f11d0ec47cb7997421e63fcfcdd619bb42,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724067158500890442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7wpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2dd3c9-6ef2-4aaf-a44f-04fcd5b5ee2a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c7867b6691ac22ff04851850c5c61a8f266622c1c39592596364c7fc6e99c39,PodSandboxId:7074b09831f6bd3b03135218f0698131342183be25852fa7f92d1bd429ec790a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724067024291050172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c212413-ac90-45fb-92de-bfd9e9115540,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67513ebd15d07c15fbfc61b2c17800f56b9db39a3b81085018122fe96a1f9de,PodSandboxId:13c07aa9a002518609bd210554f3a19471e33ebfd9b816b1390235a4c81233e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067024223121643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5dbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530828e-1061-434c-ad2f-80847f3ab171,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8315e448000809a9b1ec9c80658a16ee2c0abdfb6a00d9cf88e506833fc2e464,PodSandboxId:0b0b0a070f3ec813823103f1ea8a06af695e4510a42cd50f86da517d8da95ef1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067024221336819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2jdlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad8206ac-dc
67-4fcb-a4ad-a431f3b0b7cd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1964134e9de80be5a82ee203eebe9f5718ba2a8b84e55479e47e81a74a259a50,PodSandboxId:9079c84056e4b237e79326194bb75a2f176c9f8bf9556b3168d050890162ac6c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724067012619972475,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-st2mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e7c93b-40a9-4902-b1a5-5a6bcc55735c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68730d308f1456cb2e025409c4d2edab9b17464c74f9beabfafc13246e8cc2b3,PodSandboxId:adace0914115cd0a47335d629a4cfbb92f7251bc72bb0a436ae6daeab0d4192d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172406700
8316850072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8054009-c06a-4ccc-b6c4-22e0f6bb632a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11a47171a5438e39273ec80f4da3583d5a56af5d5de55c977d904283dc19b112,PodSandboxId:2c2e375766b14429ccbca66cbd90a4de54eadb91037b4cff34cc1cb046a93549,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172406699967
8401575,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243cc46027459bd9ae669bd4959ae8b2,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccea80d1a22a4536a82c166633220aa67d32633919751a2cf1da7f06c91bc674,PodSandboxId:982016c43ab0e7cfe37c1296b1e9abb139d5479ac81c6f267b9f0a41da9ab8a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724066997299379623,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9aebe0e986b3c246dae4d64df3eee15,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3879d2de39f1c9189c2e3eb16e843ddcbe448dd4be3f2b5ba5617abddfeb007a,PodSandboxId:eb7c9eb1ba04206c376798a19b430446dfef69f4a421e9d862de5c7e5f687408,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724066997296906648,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d885daa5ebbe527098639eca601027a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a1ce45d7b784d0b5b53838e68b5660b6da14d8a1966f23cc6949e4c31ea98e,PodSandboxId:6a5a214f4ecfbe6589eac54fcf3c31672cfe0befef185327158583c48ed17b3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724066997259670096,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 147340613ea835c906d66a92a67bf8cf,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df01b4ed6011abdeb64cada43bcfa1f28ba45ef7e17f2991c3ec8a035d214f2e,PodSandboxId:874ce4bf24c62441806a52c87405c4fc17310af6a25939f4fa49941f7e634a7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724066997211052852,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22b93ecec4cfd4767d930360e5939ae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=518ebf9e-110b-4ce0-b533-ec5184c708dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:37:15 ha-503856 crio[678]: time="2024-08-19 11:37:15.141225535Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a52d1e75-4ef5-423e-afdc-849266a5d6e0 name=/runtime.v1.RuntimeService/Version
	Aug 19 11:37:15 ha-503856 crio[678]: time="2024-08-19 11:37:15.141299422Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a52d1e75-4ef5-423e-afdc-849266a5d6e0 name=/runtime.v1.RuntimeService/Version
	Aug 19 11:37:15 ha-503856 crio[678]: time="2024-08-19 11:37:15.142760436Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c82baa6c-53a7-420d-982d-624028c20191 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:37:15 ha-503856 crio[678]: time="2024-08-19 11:37:15.143262367Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067435143237274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c82baa6c-53a7-420d-982d-624028c20191 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:37:15 ha-503856 crio[678]: time="2024-08-19 11:37:15.143839393Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae5af752-4ca1-4070-a42a-d9df4603c121 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:37:15 ha-503856 crio[678]: time="2024-08-19 11:37:15.143894885Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae5af752-4ca1-4070-a42a-d9df4603c121 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:37:15 ha-503856 crio[678]: time="2024-08-19 11:37:15.144215044Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:56a5ad9cc18e747407a6ebf193b0141d5d39e2506db7ff36e9f281ca31d07175,PodSandboxId:1191cb555eb550a2e7a62a8d5ad002f11d0ec47cb7997421e63fcfcdd619bb42,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724067158500890442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7wpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2dd3c9-6ef2-4aaf-a44f-04fcd5b5ee2a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c7867b6691ac22ff04851850c5c61a8f266622c1c39592596364c7fc6e99c39,PodSandboxId:7074b09831f6bd3b03135218f0698131342183be25852fa7f92d1bd429ec790a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724067024291050172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c212413-ac90-45fb-92de-bfd9e9115540,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67513ebd15d07c15fbfc61b2c17800f56b9db39a3b81085018122fe96a1f9de,PodSandboxId:13c07aa9a002518609bd210554f3a19471e33ebfd9b816b1390235a4c81233e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067024223121643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5dbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530828e-1061-434c-ad2f-80847f3ab171,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8315e448000809a9b1ec9c80658a16ee2c0abdfb6a00d9cf88e506833fc2e464,PodSandboxId:0b0b0a070f3ec813823103f1ea8a06af695e4510a42cd50f86da517d8da95ef1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067024221336819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2jdlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad8206ac-dc
67-4fcb-a4ad-a431f3b0b7cd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1964134e9de80be5a82ee203eebe9f5718ba2a8b84e55479e47e81a74a259a50,PodSandboxId:9079c84056e4b237e79326194bb75a2f176c9f8bf9556b3168d050890162ac6c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724067012619972475,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-st2mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e7c93b-40a9-4902-b1a5-5a6bcc55735c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68730d308f1456cb2e025409c4d2edab9b17464c74f9beabfafc13246e8cc2b3,PodSandboxId:adace0914115cd0a47335d629a4cfbb92f7251bc72bb0a436ae6daeab0d4192d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172406700
8316850072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8054009-c06a-4ccc-b6c4-22e0f6bb632a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11a47171a5438e39273ec80f4da3583d5a56af5d5de55c977d904283dc19b112,PodSandboxId:2c2e375766b14429ccbca66cbd90a4de54eadb91037b4cff34cc1cb046a93549,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172406699967
8401575,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243cc46027459bd9ae669bd4959ae8b2,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccea80d1a22a4536a82c166633220aa67d32633919751a2cf1da7f06c91bc674,PodSandboxId:982016c43ab0e7cfe37c1296b1e9abb139d5479ac81c6f267b9f0a41da9ab8a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724066997299379623,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9aebe0e986b3c246dae4d64df3eee15,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3879d2de39f1c9189c2e3eb16e843ddcbe448dd4be3f2b5ba5617abddfeb007a,PodSandboxId:eb7c9eb1ba04206c376798a19b430446dfef69f4a421e9d862de5c7e5f687408,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724066997296906648,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d885daa5ebbe527098639eca601027a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a1ce45d7b784d0b5b53838e68b5660b6da14d8a1966f23cc6949e4c31ea98e,PodSandboxId:6a5a214f4ecfbe6589eac54fcf3c31672cfe0befef185327158583c48ed17b3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724066997259670096,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 147340613ea835c906d66a92a67bf8cf,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df01b4ed6011abdeb64cada43bcfa1f28ba45ef7e17f2991c3ec8a035d214f2e,PodSandboxId:874ce4bf24c62441806a52c87405c4fc17310af6a25939f4fa49941f7e634a7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724066997211052852,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22b93ecec4cfd4767d930360e5939ae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae5af752-4ca1-4070-a42a-d9df4603c121 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:37:15 ha-503856 crio[678]: time="2024-08-19 11:37:15.186527225Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=510e9880-80bc-4d8d-a3e5-3da3c5c47ffd name=/runtime.v1.RuntimeService/Version
	Aug 19 11:37:15 ha-503856 crio[678]: time="2024-08-19 11:37:15.186600471Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=510e9880-80bc-4d8d-a3e5-3da3c5c47ffd name=/runtime.v1.RuntimeService/Version
	Aug 19 11:37:15 ha-503856 crio[678]: time="2024-08-19 11:37:15.187735745Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d756dbfe-c891-4213-894f-9a44f1e94b82 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:37:15 ha-503856 crio[678]: time="2024-08-19 11:37:15.188309285Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067435188281902,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d756dbfe-c891-4213-894f-9a44f1e94b82 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:37:15 ha-503856 crio[678]: time="2024-08-19 11:37:15.188766414Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d1a70376-4db1-4819-9433-2108655d7961 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:37:15 ha-503856 crio[678]: time="2024-08-19 11:37:15.188841664Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d1a70376-4db1-4819-9433-2108655d7961 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:37:15 ha-503856 crio[678]: time="2024-08-19 11:37:15.189124417Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:56a5ad9cc18e747407a6ebf193b0141d5d39e2506db7ff36e9f281ca31d07175,PodSandboxId:1191cb555eb550a2e7a62a8d5ad002f11d0ec47cb7997421e63fcfcdd619bb42,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724067158500890442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7wpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2dd3c9-6ef2-4aaf-a44f-04fcd5b5ee2a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c7867b6691ac22ff04851850c5c61a8f266622c1c39592596364c7fc6e99c39,PodSandboxId:7074b09831f6bd3b03135218f0698131342183be25852fa7f92d1bd429ec790a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724067024291050172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c212413-ac90-45fb-92de-bfd9e9115540,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67513ebd15d07c15fbfc61b2c17800f56b9db39a3b81085018122fe96a1f9de,PodSandboxId:13c07aa9a002518609bd210554f3a19471e33ebfd9b816b1390235a4c81233e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067024223121643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5dbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530828e-1061-434c-ad2f-80847f3ab171,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8315e448000809a9b1ec9c80658a16ee2c0abdfb6a00d9cf88e506833fc2e464,PodSandboxId:0b0b0a070f3ec813823103f1ea8a06af695e4510a42cd50f86da517d8da95ef1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067024221336819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2jdlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad8206ac-dc
67-4fcb-a4ad-a431f3b0b7cd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1964134e9de80be5a82ee203eebe9f5718ba2a8b84e55479e47e81a74a259a50,PodSandboxId:9079c84056e4b237e79326194bb75a2f176c9f8bf9556b3168d050890162ac6c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724067012619972475,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-st2mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e7c93b-40a9-4902-b1a5-5a6bcc55735c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68730d308f1456cb2e025409c4d2edab9b17464c74f9beabfafc13246e8cc2b3,PodSandboxId:adace0914115cd0a47335d629a4cfbb92f7251bc72bb0a436ae6daeab0d4192d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172406700
8316850072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8054009-c06a-4ccc-b6c4-22e0f6bb632a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11a47171a5438e39273ec80f4da3583d5a56af5d5de55c977d904283dc19b112,PodSandboxId:2c2e375766b14429ccbca66cbd90a4de54eadb91037b4cff34cc1cb046a93549,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172406699967
8401575,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243cc46027459bd9ae669bd4959ae8b2,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccea80d1a22a4536a82c166633220aa67d32633919751a2cf1da7f06c91bc674,PodSandboxId:982016c43ab0e7cfe37c1296b1e9abb139d5479ac81c6f267b9f0a41da9ab8a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724066997299379623,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9aebe0e986b3c246dae4d64df3eee15,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3879d2de39f1c9189c2e3eb16e843ddcbe448dd4be3f2b5ba5617abddfeb007a,PodSandboxId:eb7c9eb1ba04206c376798a19b430446dfef69f4a421e9d862de5c7e5f687408,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724066997296906648,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d885daa5ebbe527098639eca601027a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a1ce45d7b784d0b5b53838e68b5660b6da14d8a1966f23cc6949e4c31ea98e,PodSandboxId:6a5a214f4ecfbe6589eac54fcf3c31672cfe0befef185327158583c48ed17b3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724066997259670096,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 147340613ea835c906d66a92a67bf8cf,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df01b4ed6011abdeb64cada43bcfa1f28ba45ef7e17f2991c3ec8a035d214f2e,PodSandboxId:874ce4bf24c62441806a52c87405c4fc17310af6a25939f4fa49941f7e634a7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724066997211052852,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22b93ecec4cfd4767d930360e5939ae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d1a70376-4db1-4819-9433-2108655d7961 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	56a5ad9cc18e7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   1191cb555eb55       busybox-7dff88458-7wpbx
	6c7867b6691ac       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   7074b09831f6b       storage-provisioner
	e67513ebd15d0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   13c07aa9a0025       coredns-6f6b679f8f-5dbrz
	8315e44800080       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   0b0b0a070f3ec       coredns-6f6b679f8f-2jdlw
	1964134e9de80       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    7 minutes ago       Running             kindnet-cni               0                   9079c84056e4b       kindnet-st2mx
	68730d308f145       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      7 minutes ago       Running             kube-proxy                0                   adace0914115c       kube-proxy-d6zw9
	11a47171a5438       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   2c2e375766b14       kube-vip-ha-503856
	ccea80d1a22a4       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      7 minutes ago       Running             kube-scheduler            0                   982016c43ab0e       kube-scheduler-ha-503856
	3879d2de39f1c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   eb7c9eb1ba042       etcd-ha-503856
	c0a1ce45d7b78       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      7 minutes ago       Running             kube-apiserver            0                   6a5a214f4ecfb       kube-apiserver-ha-503856
	df01b4ed6011a       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      7 minutes ago       Running             kube-controller-manager   0                   874ce4bf24c62       kube-controller-manager-ha-503856
	
	
	==> coredns [8315e448000809a9b1ec9c80658a16ee2c0abdfb6a00d9cf88e506833fc2e464] <==
	[INFO] 10.244.0.4:53844 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014011s
	[INFO] 10.244.3.2:37901 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001741064s
	[INFO] 10.244.3.2:44495 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000198321s
	[INFO] 10.244.3.2:59991 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001295677s
	[INFO] 10.244.3.2:36199 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168276s
	[INFO] 10.244.3.2:56390 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118777s
	[INFO] 10.244.3.2:60188 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110134s
	[INFO] 10.244.1.2:48283 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110043s
	[INFO] 10.244.1.2:47868 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001551069s
	[INFO] 10.244.1.2:40080 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132463s
	[INFO] 10.244.1.2:39365 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001154088s
	[INFO] 10.244.1.2:42435 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074226s
	[INFO] 10.244.0.4:41562 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076296s
	[INFO] 10.244.0.4:56190 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067218s
	[INFO] 10.244.3.2:36444 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119378s
	[INFO] 10.244.3.2:38880 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000151765s
	[INFO] 10.244.1.2:43281 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016005s
	[INFO] 10.244.1.2:44768 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098293s
	[INFO] 10.244.0.4:42211 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000129163s
	[INFO] 10.244.0.4:53178 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082891s
	[INFO] 10.244.3.2:39486 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118564s
	[INFO] 10.244.3.2:46262 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112723s
	[INFO] 10.244.3.2:50068 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106233s
	[INFO] 10.244.1.2:43781 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134028s
	[INFO] 10.244.1.2:47607 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000071487s
	
	
	==> coredns [e67513ebd15d07c15fbfc61b2c17800f56b9db39a3b81085018122fe96a1f9de] <==
	[INFO] 10.244.1.2:45826 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116203s
	[INFO] 10.244.1.2:51336 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000106263s
	[INFO] 10.244.1.2:52489 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001893734s
	[INFO] 10.244.0.4:58770 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111417s
	[INFO] 10.244.0.4:32786 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159712s
	[INFO] 10.244.0.4:34773 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133937s
	[INFO] 10.244.0.4:34211 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003320974s
	[INFO] 10.244.0.4:44413 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105874s
	[INFO] 10.244.0.4:37795 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067103s
	[INFO] 10.244.3.2:48365 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108129s
	[INFO] 10.244.3.2:35563 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101277s
	[INFO] 10.244.1.2:41209 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111152s
	[INFO] 10.244.1.2:59241 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000195927s
	[INFO] 10.244.1.2:32916 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097287s
	[INFO] 10.244.0.4:53548 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104877s
	[INFO] 10.244.0.4:55650 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105726s
	[INFO] 10.244.3.2:40741 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204087s
	[INFO] 10.244.3.2:41373 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105987s
	[INFO] 10.244.1.2:57537 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000193166s
	[INFO] 10.244.1.2:40497 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080869s
	[INFO] 10.244.0.4:33281 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136165s
	[INFO] 10.244.0.4:49164 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000302537s
	[INFO] 10.244.3.2:54372 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000157216s
	[INFO] 10.244.1.2:40968 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000206142s
	[INFO] 10.244.1.2:54797 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000102712s
	
	
	==> describe nodes <==
	Name:               ha-503856
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-503856
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=ha-503856
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T11_30_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 11:30:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-503856
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 11:37:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 11:33:09 +0000   Mon, 19 Aug 2024 11:30:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 11:33:09 +0000   Mon, 19 Aug 2024 11:30:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 11:33:09 +0000   Mon, 19 Aug 2024 11:30:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 11:33:09 +0000   Mon, 19 Aug 2024 11:30:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-503856
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ebf7fa993760403a8b3080e5ea2bdf31
	  System UUID:                ebf7fa99-3760-403a-8b30-80e5ea2bdf31
	  Boot ID:                    f3b2611c-5dfd-45ef-8747-94b35364374b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7wpbx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 coredns-6f6b679f8f-2jdlw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m7s
	  kube-system                 coredns-6f6b679f8f-5dbrz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m7s
	  kube-system                 etcd-ha-503856                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m10s
	  kube-system                 kindnet-st2mx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m8s
	  kube-system                 kube-apiserver-ha-503856             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-controller-manager-ha-503856    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-proxy-d6zw9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 kube-scheduler-ha-503856             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-vip-ha-503856                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m6s   kube-proxy       
	  Normal  Starting                 7m10s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m10s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m9s   kubelet          Node ha-503856 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m9s   kubelet          Node ha-503856 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m9s   kubelet          Node ha-503856 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m8s   node-controller  Node ha-503856 event: Registered Node ha-503856 in Controller
	  Normal  NodeReady                6m52s  kubelet          Node ha-503856 status is now: NodeReady
	  Normal  RegisteredNode           6m12s  node-controller  Node ha-503856 event: Registered Node ha-503856 in Controller
	  Normal  RegisteredNode           5m     node-controller  Node ha-503856 event: Registered Node ha-503856 in Controller
	
	
	Name:               ha-503856-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-503856-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=ha-503856
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T11_30_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 11:30:55 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-503856-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 11:33:49 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 19 Aug 2024 11:32:58 +0000   Mon, 19 Aug 2024 11:34:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 19 Aug 2024 11:32:58 +0000   Mon, 19 Aug 2024 11:34:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 19 Aug 2024 11:32:58 +0000   Mon, 19 Aug 2024 11:34:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 19 Aug 2024 11:32:58 +0000   Mon, 19 Aug 2024 11:34:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    ha-503856-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9a5c9c65d0cb479397609eb1cad01b44
	  System UUID:                9a5c9c65-d0cb-4793-9760-9eb1cad01b44
	  Boot ID:                    c1b5d088-ad90-41a3-b25b-40f79fc85586
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nxhq6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 etcd-ha-503856-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m18s
	  kube-system                 kindnet-rnjwj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m20s
	  kube-system                 kube-apiserver-ha-503856-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-controller-manager-ha-503856-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-proxy-j2f6h                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-scheduler-ha-503856-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-vip-ha-503856-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m15s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     6m20s                  cidrAllocator    Node ha-503856-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  6m20s (x8 over 6m20s)  kubelet          Node ha-503856-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m20s (x8 over 6m20s)  kubelet          Node ha-503856-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m20s (x7 over 6m20s)  kubelet          Node ha-503856-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m18s                  node-controller  Node ha-503856-m02 event: Registered Node ha-503856-m02 in Controller
	  Normal  RegisteredNode           6m12s                  node-controller  Node ha-503856-m02 event: Registered Node ha-503856-m02 in Controller
	  Normal  RegisteredNode           5m                     node-controller  Node ha-503856-m02 event: Registered Node ha-503856-m02 in Controller
	  Normal  NodeNotReady             2m45s                  node-controller  Node ha-503856-m02 status is now: NodeNotReady
	
	
	Name:               ha-503856-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-503856-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=ha-503856
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T11_32_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 11:32:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-503856-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 11:37:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 11:33:08 +0000   Mon, 19 Aug 2024 11:32:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 11:33:08 +0000   Mon, 19 Aug 2024 11:32:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 11:33:08 +0000   Mon, 19 Aug 2024 11:32:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 11:33:08 +0000   Mon, 19 Aug 2024 11:32:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.122
	  Hostname:    ha-503856-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d357d9a38d274836bfe734b86d4bde83
	  System UUID:                d357d9a3-8d27-4836-bfe7-34b86d4bde83
	  Boot ID:                    3304d774-5407-4f16-9814-e7bbac644ac4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nbmlj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 etcd-ha-503856-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m6s
	  kube-system                 kindnet-hvszk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m8s
	  kube-system                 kube-apiserver-ha-503856-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-controller-manager-ha-503856-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-proxy-8xzr9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-scheduler-ha-503856-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-vip-ha-503856-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m3s                 kube-proxy       
	  Normal  CIDRAssignmentFailed     5m8s                 cidrAllocator    Node ha-503856-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  5m8s (x8 over 5m8s)  kubelet          Node ha-503856-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m8s (x8 over 5m8s)  kubelet          Node ha-503856-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m8s (x7 over 5m8s)  kubelet          Node ha-503856-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m7s                 node-controller  Node ha-503856-m03 event: Registered Node ha-503856-m03 in Controller
	  Normal  RegisteredNode           5m3s                 node-controller  Node ha-503856-m03 event: Registered Node ha-503856-m03 in Controller
	  Normal  RegisteredNode           5m                   node-controller  Node ha-503856-m03 event: Registered Node ha-503856-m03 in Controller
	
	
	Name:               ha-503856-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-503856-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=ha-503856
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T11_33_11_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 11:33:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-503856-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 11:37:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 11:33:41 +0000   Mon, 19 Aug 2024 11:33:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 11:33:41 +0000   Mon, 19 Aug 2024 11:33:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 11:33:41 +0000   Mon, 19 Aug 2024 11:33:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 11:33:41 +0000   Mon, 19 Aug 2024 11:33:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    ha-503856-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9fb3b2ab1e7b42139f0ea868d31218ff
	  System UUID:                9fb3b2ab-1e7b-4213-9f0e-a868d31218ff
	  Boot ID:                    64f131f4-dcd9-4d4f-be79-4fd66dede958
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-h29sh       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m4s
	  kube-system                 kube-proxy-4kpcq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  CIDRAssignmentFailed     4m4s                 cidrAllocator    Node ha-503856-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m4s (x2 over 4m5s)  kubelet          Node ha-503856-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s (x2 over 4m5s)  kubelet          Node ha-503856-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s (x2 over 4m5s)  kubelet          Node ha-503856-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-503856-m04 event: Registered Node ha-503856-m04 in Controller
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-503856-m04 event: Registered Node ha-503856-m04 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-503856-m04 event: Registered Node ha-503856-m04 in Controller
	  Normal  NodeReady                3m45s                kubelet          Node ha-503856-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug19 11:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.047854] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036981] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.730064] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.920232] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.453643] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.042926] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.060482] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062102] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.195986] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.137965] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.282518] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.003020] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.667712] systemd-fstab-generator[899]: Ignoring "noauto" option for root device
	[  +0.056031] kauditd_printk_skb: 158 callbacks suppressed
	[Aug19 11:30] systemd-fstab-generator[1324]: Ignoring "noauto" option for root device
	[  +0.088050] kauditd_printk_skb: 79 callbacks suppressed
	[  +7.046565] kauditd_printk_skb: 60 callbacks suppressed
	[Aug19 11:31] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [3879d2de39f1c9189c2e3eb16e843ddcbe448dd4be3f2b5ba5617abddfeb007a] <==
	{"level":"warn","ts":"2024-08-19T11:37:15.083958Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:37:15.183934Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:37:15.223326Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:37:15.283565Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:37:15.384385Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:37:15.457869Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:37:15.463926Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:37:15.465516Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:37:15.469130Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:37:15.473651Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:37:15.478995Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:37:15.483820Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:37:15.485416Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:37:15.490856Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:37:15.494724Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:37:15.497856Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:37:15.504441Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:37:15.511931Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:37:15.519223Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:37:15.523648Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:37:15.527236Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:37:15.533812Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:37:15.539691Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:37:15.545576Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T11:37:15.583729Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:37:15 up 7 min,  0 users,  load average: 0.53, 0.32, 0.17
	Linux ha-503856 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1964134e9de80be5a82ee203eebe9f5718ba2a8b84e55479e47e81a74a259a50] <==
	I0819 11:36:43.540840       1 main.go:322] Node ha-503856-m02 has CIDR [10.244.1.0/24] 
	I0819 11:36:53.539314       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0819 11:36:53.539455       1 main.go:299] handling current node
	I0819 11:36:53.539494       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0819 11:36:53.539578       1 main.go:322] Node ha-503856-m02 has CIDR [10.244.1.0/24] 
	I0819 11:36:53.539803       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0819 11:36:53.539865       1 main.go:322] Node ha-503856-m03 has CIDR [10.244.3.0/24] 
	I0819 11:36:53.539968       1 main.go:295] Handling node with IPs: map[192.168.39.161:{}]
	I0819 11:36:53.540000       1 main.go:322] Node ha-503856-m04 has CIDR [10.244.4.0/24] 
	I0819 11:37:03.538457       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0819 11:37:03.538505       1 main.go:299] handling current node
	I0819 11:37:03.538519       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0819 11:37:03.538524       1 main.go:322] Node ha-503856-m02 has CIDR [10.244.1.0/24] 
	I0819 11:37:03.538673       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0819 11:37:03.538694       1 main.go:322] Node ha-503856-m03 has CIDR [10.244.3.0/24] 
	I0819 11:37:03.538749       1 main.go:295] Handling node with IPs: map[192.168.39.161:{}]
	I0819 11:37:03.538754       1 main.go:322] Node ha-503856-m04 has CIDR [10.244.4.0/24] 
	I0819 11:37:13.530509       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0819 11:37:13.530603       1 main.go:322] Node ha-503856-m03 has CIDR [10.244.3.0/24] 
	I0819 11:37:13.530823       1 main.go:295] Handling node with IPs: map[192.168.39.161:{}]
	I0819 11:37:13.530856       1 main.go:322] Node ha-503856-m04 has CIDR [10.244.4.0/24] 
	I0819 11:37:13.530930       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0819 11:37:13.530938       1 main.go:299] handling current node
	I0819 11:37:13.530955       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0819 11:37:13.530960       1 main.go:322] Node ha-503856-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [c0a1ce45d7b784d0b5b53838e68b5660b6da14d8a1966f23cc6949e4c31ea98e] <==
	I0819 11:30:05.944658       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 11:30:05.962433       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0819 11:30:05.972121       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 11:30:07.781535       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0819 11:30:07.870178       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0819 11:32:08.060799       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0819 11:32:08.061328       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 322.299µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0819 11:32:08.062189       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0819 11:32:08.063475       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0819 11:32:08.065800       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.098798ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0819 11:32:39.975641       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54540: use of closed network connection
	E0819 11:32:40.165200       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54556: use of closed network connection
	E0819 11:32:40.364705       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54574: use of closed network connection
	E0819 11:32:40.560779       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54592: use of closed network connection
	E0819 11:32:40.746140       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54616: use of closed network connection
	E0819 11:32:40.925040       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54636: use of closed network connection
	E0819 11:32:41.094328       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54650: use of closed network connection
	E0819 11:32:41.277612       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54666: use of closed network connection
	E0819 11:32:41.452804       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54688: use of closed network connection
	E0819 11:32:41.750815       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54714: use of closed network connection
	E0819 11:32:41.936432       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54740: use of closed network connection
	E0819 11:32:42.120632       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54764: use of closed network connection
	E0819 11:32:42.300925       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54786: use of closed network connection
	E0819 11:32:42.487047       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54798: use of closed network connection
	W0819 11:34:02.184667       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.122]
	
	
	==> kube-controller-manager [df01b4ed6011abdeb64cada43bcfa1f28ba45ef7e17f2991c3ec8a035d214f2e] <==
	I0819 11:33:11.171483       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-503856-m04" podCIDRs=["10.244.4.0/24"]
	I0819 11:33:11.171595       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:11.171654       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:11.180152       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:11.444236       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:11.838190       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:12.194495       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-503856-m04"
	I0819 11:33:12.279464       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:13.960134       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:14.009602       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:15.267879       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:15.302991       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:21.512563       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:30.356372       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:30.356464       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-503856-m04"
	I0819 11:33:30.371949       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:32.207322       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:33:41.837921       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:34:30.300213       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m02"
	I0819 11:34:30.300638       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-503856-m04"
	I0819 11:34:30.320643       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m02"
	I0819 11:34:30.353664       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.521527ms"
	I0819 11:34:30.353908       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.913µs"
	I0819 11:34:32.311614       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m02"
	I0819 11:34:35.497733       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m02"
	
	
	==> kube-proxy [68730d308f1456cb2e025409c4d2edab9b17464c74f9beabfafc13246e8cc2b3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 11:30:08.502363       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 11:30:08.511263       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.102"]
	E0819 11:30:08.511399       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 11:30:08.545498       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 11:30:08.545608       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 11:30:08.545648       1 server_linux.go:169] "Using iptables Proxier"
	I0819 11:30:08.548637       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 11:30:08.549020       1 server.go:483] "Version info" version="v1.31.0"
	I0819 11:30:08.549220       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 11:30:08.550765       1 config.go:197] "Starting service config controller"
	I0819 11:30:08.550867       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 11:30:08.550913       1 config.go:104] "Starting endpoint slice config controller"
	I0819 11:30:08.550930       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 11:30:08.551577       1 config.go:326] "Starting node config controller"
	I0819 11:30:08.551621       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 11:30:08.651853       1 shared_informer.go:320] Caches are synced for node config
	I0819 11:30:08.652003       1 shared_informer.go:320] Caches are synced for service config
	I0819 11:30:08.652014       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ccea80d1a22a4536a82c166633220aa67d32633919751a2cf1da7f06c91bc674] <==
	W0819 11:30:01.347827       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 11:30:01.349511       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 11:30:01.452894       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 11:30:01.453008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:30:01.486121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 11:30:01.486169       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:30:01.488303       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 11:30:01.488336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:30:01.523022       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 11:30:01.523111       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:30:01.525952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 11:30:01.526028       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:30:01.659419       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 11:30:01.660325       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 11:30:03.075764       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 11:33:11.218857       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-h29sh\": pod kindnet-h29sh is already assigned to node \"ha-503856-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-h29sh" node="ha-503856-m04"
	E0819 11:33:11.219015       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-h29sh\": pod kindnet-h29sh is already assigned to node \"ha-503856-m04\"" pod="kube-system/kindnet-h29sh"
	E0819 11:33:11.221900       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4kpcq\": pod kube-proxy-4kpcq is already assigned to node \"ha-503856-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4kpcq" node="ha-503856-m04"
	E0819 11:33:11.221962       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9f038ca5-2e98-4126-9959-f24f6ab3a802(kube-system/kube-proxy-4kpcq) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-4kpcq"
	E0819 11:33:11.221977       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4kpcq\": pod kube-proxy-4kpcq is already assigned to node \"ha-503856-m04\"" pod="kube-system/kube-proxy-4kpcq"
	I0819 11:33:11.222009       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4kpcq" node="ha-503856-m04"
	E0819 11:33:11.260369       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5zzk5\": pod kube-proxy-5zzk5 is already assigned to node \"ha-503856-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5zzk5" node="ha-503856-m04"
	E0819 11:33:11.260439       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 29216c29-6ceb-411d-a714-c94d674aed3f(kube-system/kube-proxy-5zzk5) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5zzk5"
	E0819 11:33:11.260454       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5zzk5\": pod kube-proxy-5zzk5 is already assigned to node \"ha-503856-m04\"" pod="kube-system/kube-proxy-5zzk5"
	I0819 11:33:11.260471       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5zzk5" node="ha-503856-m04"
	
	
	==> kubelet <==
	Aug 19 11:35:56 ha-503856 kubelet[1331]: E0819 11:35:56.000699    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067356000397456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:36:05 ha-503856 kubelet[1331]: E0819 11:36:05.899682    1331 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 11:36:05 ha-503856 kubelet[1331]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 11:36:05 ha-503856 kubelet[1331]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 11:36:05 ha-503856 kubelet[1331]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 11:36:05 ha-503856 kubelet[1331]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 11:36:06 ha-503856 kubelet[1331]: E0819 11:36:06.002213    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067366001919127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:36:06 ha-503856 kubelet[1331]: E0819 11:36:06.002270    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067366001919127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:36:16 ha-503856 kubelet[1331]: E0819 11:36:16.004425    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067376003740509,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:36:16 ha-503856 kubelet[1331]: E0819 11:36:16.004900    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067376003740509,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:36:26 ha-503856 kubelet[1331]: E0819 11:36:26.008685    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067386008038414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:36:26 ha-503856 kubelet[1331]: E0819 11:36:26.008733    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067386008038414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:36:36 ha-503856 kubelet[1331]: E0819 11:36:36.010399    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067396009967518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:36:36 ha-503856 kubelet[1331]: E0819 11:36:36.010440    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067396009967518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:36:46 ha-503856 kubelet[1331]: E0819 11:36:46.012620    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067406012355157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:36:46 ha-503856 kubelet[1331]: E0819 11:36:46.012685    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067406012355157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:36:56 ha-503856 kubelet[1331]: E0819 11:36:56.014340    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067416014024680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:36:56 ha-503856 kubelet[1331]: E0819 11:36:56.014366    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067416014024680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:37:05 ha-503856 kubelet[1331]: E0819 11:37:05.896284    1331 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 11:37:05 ha-503856 kubelet[1331]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 11:37:05 ha-503856 kubelet[1331]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 11:37:05 ha-503856 kubelet[1331]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 11:37:05 ha-503856 kubelet[1331]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 11:37:06 ha-503856 kubelet[1331]: E0819 11:37:06.017023    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067426016018165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:37:06 ha-503856 kubelet[1331]: E0819 11:37:06.017084    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067426016018165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-503856 -n ha-503856
helpers_test.go:261: (dbg) Run:  kubectl --context ha-503856 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (60.67s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (403.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-503856 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-503856 -v=7 --alsologtostderr
E0819 11:38:35.348215  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:39:03.051309  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-503856 -v=7 --alsologtostderr: exit status 82 (2m1.80794922s)

                                                
                                                
-- stdout --
	* Stopping node "ha-503856-m04"  ...
	* Stopping node "ha-503856-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:37:17.019880  127165 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:37:17.020107  127165 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:37:17.020115  127165 out.go:358] Setting ErrFile to fd 2...
	I0819 11:37:17.020119  127165 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:37:17.020300  127165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 11:37:17.020522  127165 out.go:352] Setting JSON to false
	I0819 11:37:17.020608  127165 mustload.go:65] Loading cluster: ha-503856
	I0819 11:37:17.020971  127165 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:37:17.021054  127165 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/config.json ...
	I0819 11:37:17.021228  127165 mustload.go:65] Loading cluster: ha-503856
	I0819 11:37:17.021352  127165 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:37:17.021379  127165 stop.go:39] StopHost: ha-503856-m04
	I0819 11:37:17.021761  127165 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:37:17.021806  127165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:37:17.037113  127165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32827
	I0819 11:37:17.037696  127165 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:37:17.038368  127165 main.go:141] libmachine: Using API Version  1
	I0819 11:37:17.038397  127165 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:37:17.038801  127165 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:37:17.041355  127165 out.go:177] * Stopping node "ha-503856-m04"  ...
	I0819 11:37:17.042674  127165 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 11:37:17.042718  127165 main.go:141] libmachine: (ha-503856-m04) Calling .DriverName
	I0819 11:37:17.043004  127165 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 11:37:17.043048  127165 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHHostname
	I0819 11:37:17.046192  127165 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:37:17.046699  127165 main.go:141] libmachine: (ha-503856-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d3:72", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:32:57 +0000 UTC Type:0 Mac:52:54:00:56:d3:72 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-503856-m04 Clientid:01:52:54:00:56:d3:72}
	I0819 11:37:17.046731  127165 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:37:17.046899  127165 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHPort
	I0819 11:37:17.047128  127165 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHKeyPath
	I0819 11:37:17.047286  127165 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHUsername
	I0819 11:37:17.047452  127165 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m04/id_rsa Username:docker}
	I0819 11:37:17.131957  127165 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 11:37:17.186273  127165 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 11:37:17.241708  127165 main.go:141] libmachine: Stopping "ha-503856-m04"...
	I0819 11:37:17.241753  127165 main.go:141] libmachine: (ha-503856-m04) Calling .GetState
	I0819 11:37:17.243515  127165 main.go:141] libmachine: (ha-503856-m04) Calling .Stop
	I0819 11:37:17.247350  127165 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 0/120
	I0819 11:37:18.353721  127165 main.go:141] libmachine: (ha-503856-m04) Calling .GetState
	I0819 11:37:18.355105  127165 main.go:141] libmachine: Machine "ha-503856-m04" was stopped.
	I0819 11:37:18.355123  127165 stop.go:75] duration metric: took 1.312454671s to stop
	I0819 11:37:18.355143  127165 stop.go:39] StopHost: ha-503856-m03
	I0819 11:37:18.355471  127165 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:37:18.355512  127165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:37:18.370778  127165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37109
	I0819 11:37:18.371230  127165 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:37:18.371738  127165 main.go:141] libmachine: Using API Version  1
	I0819 11:37:18.371762  127165 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:37:18.372097  127165 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:37:18.374266  127165 out.go:177] * Stopping node "ha-503856-m03"  ...
	I0819 11:37:18.375594  127165 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 11:37:18.375618  127165 main.go:141] libmachine: (ha-503856-m03) Calling .DriverName
	I0819 11:37:18.375879  127165 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 11:37:18.375904  127165 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHHostname
	I0819 11:37:18.378907  127165 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:37:18.379408  127165 main.go:141] libmachine: (ha-503856-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:1f:39", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:31:34 +0000 UTC Type:0 Mac:52:54:00:10:1f:39 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-503856-m03 Clientid:01:52:54:00:10:1f:39}
	I0819 11:37:18.379448  127165 main.go:141] libmachine: (ha-503856-m03) DBG | domain ha-503856-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:10:1f:39 in network mk-ha-503856
	I0819 11:37:18.379683  127165 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHPort
	I0819 11:37:18.379872  127165 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHKeyPath
	I0819 11:37:18.380036  127165 main.go:141] libmachine: (ha-503856-m03) Calling .GetSSHUsername
	I0819 11:37:18.380162  127165 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m03/id_rsa Username:docker}
	I0819 11:37:18.462564  127165 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 11:37:18.515965  127165 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 11:37:18.570128  127165 main.go:141] libmachine: Stopping "ha-503856-m03"...
	I0819 11:37:18.570152  127165 main.go:141] libmachine: (ha-503856-m03) Calling .GetState
	I0819 11:37:18.571900  127165 main.go:141] libmachine: (ha-503856-m03) Calling .Stop
	I0819 11:37:18.575344  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 0/120
	I0819 11:37:19.577662  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 1/120
	I0819 11:37:20.579895  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 2/120
	I0819 11:37:21.581071  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 3/120
	I0819 11:37:22.582729  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 4/120
	I0819 11:37:23.585280  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 5/120
	I0819 11:37:24.586954  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 6/120
	I0819 11:37:25.588641  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 7/120
	I0819 11:37:26.590175  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 8/120
	I0819 11:37:27.591802  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 9/120
	I0819 11:37:28.594235  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 10/120
	I0819 11:37:29.595941  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 11/120
	I0819 11:37:30.598485  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 12/120
	I0819 11:37:31.599919  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 13/120
	I0819 11:37:32.601579  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 14/120
	I0819 11:37:33.603443  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 15/120
	I0819 11:37:34.604971  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 16/120
	I0819 11:37:35.606450  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 17/120
	I0819 11:37:36.607961  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 18/120
	I0819 11:37:37.609567  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 19/120
	I0819 11:37:38.611790  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 20/120
	I0819 11:37:39.613368  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 21/120
	I0819 11:37:40.615175  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 22/120
	I0819 11:37:41.616679  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 23/120
	I0819 11:37:42.618479  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 24/120
	I0819 11:37:43.620716  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 25/120
	I0819 11:37:44.622450  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 26/120
	I0819 11:37:45.624061  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 27/120
	I0819 11:37:46.625398  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 28/120
	I0819 11:37:47.627120  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 29/120
	I0819 11:37:48.629007  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 30/120
	I0819 11:37:49.630560  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 31/120
	I0819 11:37:50.631994  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 32/120
	I0819 11:37:51.634418  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 33/120
	I0819 11:37:52.636098  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 34/120
	I0819 11:37:53.638022  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 35/120
	I0819 11:37:54.639899  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 36/120
	I0819 11:37:55.642357  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 37/120
	I0819 11:37:56.643744  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 38/120
	I0819 11:37:57.645252  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 39/120
	I0819 11:37:58.646762  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 40/120
	I0819 11:37:59.648097  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 41/120
	I0819 11:38:00.649646  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 42/120
	I0819 11:38:01.651104  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 43/120
	I0819 11:38:02.652687  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 44/120
	I0819 11:38:03.654581  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 45/120
	I0819 11:38:04.656130  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 46/120
	I0819 11:38:05.657849  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 47/120
	I0819 11:38:06.659549  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 48/120
	I0819 11:38:07.661211  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 49/120
	I0819 11:38:08.662895  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 50/120
	I0819 11:38:09.664249  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 51/120
	I0819 11:38:10.666320  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 52/120
	I0819 11:38:11.667586  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 53/120
	I0819 11:38:12.669270  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 54/120
	I0819 11:38:13.671175  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 55/120
	I0819 11:38:14.672484  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 56/120
	I0819 11:38:15.674216  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 57/120
	I0819 11:38:16.675682  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 58/120
	I0819 11:38:17.677120  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 59/120
	I0819 11:38:18.678974  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 60/120
	I0819 11:38:19.680270  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 61/120
	I0819 11:38:20.681850  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 62/120
	I0819 11:38:21.683211  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 63/120
	I0819 11:38:22.684851  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 64/120
	I0819 11:38:23.686682  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 65/120
	I0819 11:38:24.688238  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 66/120
	I0819 11:38:25.689868  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 67/120
	I0819 11:38:26.691248  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 68/120
	I0819 11:38:27.692818  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 69/120
	I0819 11:38:28.694679  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 70/120
	I0819 11:38:29.696030  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 71/120
	I0819 11:38:30.698523  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 72/120
	I0819 11:38:31.699859  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 73/120
	I0819 11:38:32.701365  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 74/120
	I0819 11:38:33.703430  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 75/120
	I0819 11:38:34.704987  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 76/120
	I0819 11:38:35.706516  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 77/120
	I0819 11:38:36.707931  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 78/120
	I0819 11:38:37.709642  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 79/120
	I0819 11:38:38.711945  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 80/120
	I0819 11:38:39.714213  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 81/120
	I0819 11:38:40.715700  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 82/120
	I0819 11:38:41.717006  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 83/120
	I0819 11:38:42.718551  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 84/120
	I0819 11:38:43.720545  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 85/120
	I0819 11:38:44.721846  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 86/120
	I0819 11:38:45.723085  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 87/120
	I0819 11:38:46.724524  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 88/120
	I0819 11:38:47.726058  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 89/120
	I0819 11:38:48.728094  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 90/120
	I0819 11:38:49.729394  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 91/120
	I0819 11:38:50.731077  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 92/120
	I0819 11:38:51.732403  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 93/120
	I0819 11:38:52.733888  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 94/120
	I0819 11:38:53.735960  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 95/120
	I0819 11:38:54.738320  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 96/120
	I0819 11:38:55.739688  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 97/120
	I0819 11:38:56.741228  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 98/120
	I0819 11:38:57.742635  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 99/120
	I0819 11:38:58.744484  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 100/120
	I0819 11:38:59.746150  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 101/120
	I0819 11:39:00.747561  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 102/120
	I0819 11:39:01.749319  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 103/120
	I0819 11:39:02.750770  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 104/120
	I0819 11:39:03.752956  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 105/120
	I0819 11:39:04.754467  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 106/120
	I0819 11:39:05.756048  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 107/120
	I0819 11:39:06.757650  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 108/120
	I0819 11:39:07.759249  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 109/120
	I0819 11:39:08.761046  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 110/120
	I0819 11:39:09.762603  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 111/120
	I0819 11:39:10.764256  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 112/120
	I0819 11:39:11.765967  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 113/120
	I0819 11:39:12.767513  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 114/120
	I0819 11:39:13.769580  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 115/120
	I0819 11:39:14.771168  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 116/120
	I0819 11:39:15.772669  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 117/120
	I0819 11:39:16.774307  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 118/120
	I0819 11:39:17.775967  127165 main.go:141] libmachine: (ha-503856-m03) Waiting for machine to stop 119/120
	I0819 11:39:18.777484  127165 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0819 11:39:18.777554  127165 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0819 11:39:18.779431  127165 out.go:201] 
	W0819 11:39:18.780650  127165 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0819 11:39:18.780665  127165 out.go:270] * 
	* 
	W0819 11:39:18.782740  127165 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:39:18.784103  127165 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-503856 -v=7 --alsologtostderr" : exit status 82
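The GUEST_STOP_TIMEOUT above means the m03 guest never reported a stopped state within the 120 one-second polls. As a hypothetical manual triage step (not something the test itself runs), one could query libvirt directly, assuming the kvm2 driver registered the node under its machine name and using the qemu:///system URI that appears later in this log:

    virsh -c qemu:///system list --all                 # is ha-503856-m03 still reported as running?
    virsh -c qemu:///system shutdown ha-503856-m03     # retry a graceful ACPI shutdown
    virsh -c qemu:///system destroy ha-503856-m03      # hard power-off as a last resort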
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-503856 --wait=true -v=7 --alsologtostderr
E0819 11:43:35.347814  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-503856 --wait=true -v=7 --alsologtostderr: (4m39.530200804s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-503856
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-503856 -n ha-503856
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-503856 logs -n 25: (1.649081654s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-503856 cp ha-503856-m03:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m02:/home/docker/cp-test_ha-503856-m03_ha-503856-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n ha-503856-m02 sudo cat                                          | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /home/docker/cp-test_ha-503856-m03_ha-503856-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-503856 cp ha-503856-m03:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04:/home/docker/cp-test_ha-503856-m03_ha-503856-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n ha-503856-m04 sudo cat                                          | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /home/docker/cp-test_ha-503856-m03_ha-503856-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-503856 cp testdata/cp-test.txt                                                | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-503856 cp ha-503856-m04:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4008298079/001/cp-test_ha-503856-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-503856 cp ha-503856-m04:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856:/home/docker/cp-test_ha-503856-m04_ha-503856.txt                       |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n ha-503856 sudo cat                                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /home/docker/cp-test_ha-503856-m04_ha-503856.txt                                 |           |         |         |                     |                     |
	| cp      | ha-503856 cp ha-503856-m04:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m02:/home/docker/cp-test_ha-503856-m04_ha-503856-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n ha-503856-m02 sudo cat                                          | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /home/docker/cp-test_ha-503856-m04_ha-503856-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-503856 cp ha-503856-m04:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m03:/home/docker/cp-test_ha-503856-m04_ha-503856-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n ha-503856-m03 sudo cat                                          | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /home/docker/cp-test_ha-503856-m04_ha-503856-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-503856 node stop m02 -v=7                                                     | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-503856 node start m02 -v=7                                                    | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:36 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-503856 -v=7                                                           | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-503856 -v=7                                                                | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-503856 --wait=true -v=7                                                    | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:39 UTC | 19 Aug 24 11:43 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-503856                                                                | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:43 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 11:39:18
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 11:39:18.835403  127629 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:39:18.835548  127629 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:39:18.835557  127629 out.go:358] Setting ErrFile to fd 2...
	I0819 11:39:18.835561  127629 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:39:18.835776  127629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 11:39:18.836362  127629 out.go:352] Setting JSON to false
	I0819 11:39:18.837320  127629 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4905,"bootTime":1724062654,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 11:39:18.837405  127629 start.go:139] virtualization: kvm guest
	I0819 11:39:18.839741  127629 out.go:177] * [ha-503856] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 11:39:18.841149  127629 notify.go:220] Checking for updates...
	I0819 11:39:18.841168  127629 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 11:39:18.842417  127629 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:39:18.843589  127629 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 11:39:18.844833  127629 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 11:39:18.846140  127629 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 11:39:18.847529  127629 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:39:18.849166  127629 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:39:18.849269  127629 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:39:18.849703  127629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:39:18.849759  127629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:39:18.865677  127629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37413
	I0819 11:39:18.866196  127629 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:39:18.866751  127629 main.go:141] libmachine: Using API Version  1
	I0819 11:39:18.866771  127629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:39:18.867096  127629 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:39:18.867305  127629 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:39:18.905566  127629 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 11:39:18.906664  127629 start.go:297] selected driver: kvm2
	I0819 11:39:18.906695  127629 start.go:901] validating driver "kvm2" against &{Name:ha-503856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.0 ClusterName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false e
fk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:39:18.906948  127629 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:39:18.907463  127629 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:39:18.907580  127629 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19476-99410/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 11:39:18.924433  127629 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 11:39:18.925244  127629 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:39:18.925301  127629 cni.go:84] Creating CNI manager for ""
	I0819 11:39:18.925314  127629 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 11:39:18.925378  127629 start.go:340] cluster config:
	{Name:ha-503856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-503856 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-ti
ller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:39:18.925537  127629 iso.go:125] acquiring lock: {Name:mkf3e460532d8bd3cc1352bbf698537d79a481a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:39:18.927195  127629 out.go:177] * Starting "ha-503856" primary control-plane node in "ha-503856" cluster
	I0819 11:39:18.928386  127629 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 11:39:18.928432  127629 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 11:39:18.928440  127629 cache.go:56] Caching tarball of preloaded images
	I0819 11:39:18.928546  127629 preload.go:172] Found /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 11:39:18.928558  127629 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 11:39:18.928673  127629 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/config.json ...
	I0819 11:39:18.928924  127629 start.go:360] acquireMachinesLock for ha-503856: {Name:mk79269fbbd6912302c8df8ddb039d4e2f1a0790 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:39:18.928968  127629 start.go:364] duration metric: took 25.817µs to acquireMachinesLock for "ha-503856"
	I0819 11:39:18.928985  127629 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:39:18.928990  127629 fix.go:54] fixHost starting: 
	I0819 11:39:18.929239  127629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:39:18.929279  127629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:39:18.944048  127629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43587
	I0819 11:39:18.944467  127629 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:39:18.944960  127629 main.go:141] libmachine: Using API Version  1
	I0819 11:39:18.944987  127629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:39:18.945391  127629 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:39:18.945627  127629 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:39:18.945784  127629 main.go:141] libmachine: (ha-503856) Calling .GetState
	I0819 11:39:18.947504  127629 fix.go:112] recreateIfNeeded on ha-503856: state=Running err=<nil>
	W0819 11:39:18.947528  127629 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:39:18.949220  127629 out.go:177] * Updating the running kvm2 "ha-503856" VM ...
	I0819 11:39:18.950266  127629 machine.go:93] provisionDockerMachine start ...
	I0819 11:39:18.950343  127629 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:39:18.950570  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:39:18.953405  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:18.953875  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:39:18.953900  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:18.954107  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:39:18.954376  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:39:18.954522  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:39:18.954787  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:39:18.955050  127629 main.go:141] libmachine: Using SSH client type: native
	I0819 11:39:18.955240  127629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0819 11:39:18.955251  127629 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 11:39:19.060101  127629 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-503856
	
	I0819 11:39:19.060132  127629 main.go:141] libmachine: (ha-503856) Calling .GetMachineName
	I0819 11:39:19.060440  127629 buildroot.go:166] provisioning hostname "ha-503856"
	I0819 11:39:19.060468  127629 main.go:141] libmachine: (ha-503856) Calling .GetMachineName
	I0819 11:39:19.060670  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:39:19.063532  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:19.064054  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:39:19.064083  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:19.064285  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:39:19.064484  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:39:19.064669  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:39:19.064816  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:39:19.064994  127629 main.go:141] libmachine: Using SSH client type: native
	I0819 11:39:19.065217  127629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0819 11:39:19.065235  127629 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-503856 && echo "ha-503856" | sudo tee /etc/hostname
	I0819 11:39:19.187967  127629 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-503856
	
	I0819 11:39:19.187999  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:39:19.190894  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:19.191262  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:39:19.191291  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:19.191495  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:39:19.191684  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:39:19.191864  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:39:19.192056  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:39:19.192257  127629 main.go:141] libmachine: Using SSH client type: native
	I0819 11:39:19.192460  127629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0819 11:39:19.192478  127629 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-503856' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-503856/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-503856' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 11:39:19.300156  127629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 11:39:19.300186  127629 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19476-99410/.minikube CaCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19476-99410/.minikube}
	I0819 11:39:19.300211  127629 buildroot.go:174] setting up certificates
	I0819 11:39:19.300223  127629 provision.go:84] configureAuth start
	I0819 11:39:19.300237  127629 main.go:141] libmachine: (ha-503856) Calling .GetMachineName
	I0819 11:39:19.300535  127629 main.go:141] libmachine: (ha-503856) Calling .GetIP
	I0819 11:39:19.302711  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:19.303124  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:39:19.303146  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:19.303383  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:39:19.305839  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:19.306292  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:39:19.306320  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:19.306460  127629 provision.go:143] copyHostCerts
	I0819 11:39:19.306492  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 11:39:19.306552  127629 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem, removing ...
	I0819 11:39:19.306572  127629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 11:39:19.306642  127629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem (1082 bytes)
	I0819 11:39:19.306723  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 11:39:19.306740  127629 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem, removing ...
	I0819 11:39:19.306747  127629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 11:39:19.306773  127629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem (1123 bytes)
	I0819 11:39:19.306816  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 11:39:19.306832  127629 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem, removing ...
	I0819 11:39:19.306838  127629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 11:39:19.306858  127629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem (1679 bytes)
	I0819 11:39:19.306929  127629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem org=jenkins.ha-503856 san=[127.0.0.1 192.168.39.102 ha-503856 localhost minikube]
	I0819 11:39:19.392917  127629 provision.go:177] copyRemoteCerts
	I0819 11:39:19.392986  127629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 11:39:19.393017  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:39:19.395971  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:19.396305  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:39:19.396332  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:19.396594  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:39:19.396794  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:39:19.396956  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:39:19.397099  127629 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:39:19.481789  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 11:39:19.481872  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 11:39:19.508192  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 11:39:19.508277  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 11:39:19.534486  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 11:39:19.534572  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0819 11:39:19.562207  127629 provision.go:87] duration metric: took 261.970066ms to configureAuth
	I0819 11:39:19.562237  127629 buildroot.go:189] setting minikube options for container-runtime
	I0819 11:39:19.562442  127629 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:39:19.562558  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:39:19.565685  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:19.566141  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:39:19.566168  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:19.566536  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:39:19.566770  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:39:19.566919  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:39:19.567098  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:39:19.567236  127629 main.go:141] libmachine: Using SSH client type: native
	I0819 11:39:19.567409  127629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0819 11:39:19.567424  127629 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 11:40:50.425779  127629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 11:40:50.425809  127629 machine.go:96] duration metric: took 1m31.475523752s to provisionDockerMachine
	I0819 11:40:50.425836  127629 start.go:293] postStartSetup for "ha-503856" (driver="kvm2")
	I0819 11:40:50.425853  127629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 11:40:50.425884  127629 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:40:50.426290  127629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 11:40:50.426326  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:40:50.429727  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:50.430290  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:40:50.430321  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:50.430509  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:40:50.430721  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:40:50.430915  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:40:50.431060  127629 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:40:50.514384  127629 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 11:40:50.518799  127629 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 11:40:50.518826  127629 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/addons for local assets ...
	I0819 11:40:50.518893  127629 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/files for local assets ...
	I0819 11:40:50.518974  127629 filesync.go:149] local asset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> 1066322.pem in /etc/ssl/certs
	I0819 11:40:50.518991  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> /etc/ssl/certs/1066322.pem
	I0819 11:40:50.519106  127629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 11:40:50.528289  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 11:40:50.552292  127629 start.go:296] duration metric: took 126.436353ms for postStartSetup
	I0819 11:40:50.552339  127629 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:40:50.552637  127629 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0819 11:40:50.552667  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:40:50.555339  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:50.555712  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:40:50.555750  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:50.555908  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:40:50.556112  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:40:50.556313  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:40:50.556470  127629 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	W0819 11:40:50.637638  127629 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0819 11:40:50.637672  127629 fix.go:56] duration metric: took 1m31.708680394s for fixHost
	I0819 11:40:50.637698  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:40:50.640384  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:50.640764  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:40:50.640784  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:50.640965  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:40:50.641185  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:40:50.641354  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:40:50.641496  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:40:50.641661  127629 main.go:141] libmachine: Using SSH client type: native
	I0819 11:40:50.641906  127629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0819 11:40:50.641922  127629 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 11:40:50.748525  127629 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724067650.701950950
	
	I0819 11:40:50.748553  127629 fix.go:216] guest clock: 1724067650.701950950
	I0819 11:40:50.748564  127629 fix.go:229] Guest: 2024-08-19 11:40:50.70195095 +0000 UTC Remote: 2024-08-19 11:40:50.63768201 +0000 UTC m=+91.842315702 (delta=64.26894ms)
	I0819 11:40:50.748597  127629 fix.go:200] guest clock delta is within tolerance: 64.26894ms
	I0819 11:40:50.748603  127629 start.go:83] releasing machines lock for "ha-503856", held for 1m31.819625092s
	I0819 11:40:50.748621  127629 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:40:50.748909  127629 main.go:141] libmachine: (ha-503856) Calling .GetIP
	I0819 11:40:50.751570  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:50.751936  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:40:50.751971  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:50.752145  127629 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:40:50.752690  127629 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:40:50.752874  127629 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:40:50.752975  127629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 11:40:50.753032  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:40:50.753082  127629 ssh_runner.go:195] Run: cat /version.json
	I0819 11:40:50.753104  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:40:50.755746  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:50.755776  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:50.756194  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:40:50.756222  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:50.756248  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:40:50.756265  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:50.756381  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:40:50.756500  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:40:50.756583  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:40:50.756668  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:40:50.756730  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:40:50.756787  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:40:50.756972  127629 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:40:50.756966  127629 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:40:50.832531  127629 ssh_runner.go:195] Run: systemctl --version
	I0819 11:40:50.857358  127629 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 11:40:51.012982  127629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 11:40:51.018808  127629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 11:40:51.018891  127629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 11:40:51.028126  127629 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 11:40:51.028310  127629 start.go:495] detecting cgroup driver to use...
	I0819 11:40:51.028396  127629 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 11:40:51.048927  127629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 11:40:51.065769  127629 docker.go:217] disabling cri-docker service (if available) ...
	I0819 11:40:51.065840  127629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 11:40:51.080863  127629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 11:40:51.095976  127629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 11:40:51.252866  127629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 11:40:51.398292  127629 docker.go:233] disabling docker service ...
	I0819 11:40:51.398374  127629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 11:40:51.414328  127629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 11:40:51.428842  127629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 11:40:51.569966  127629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 11:40:51.729385  127629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 11:40:51.744302  127629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 11:40:51.765536  127629 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 11:40:51.765600  127629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:40:51.777569  127629 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 11:40:51.777630  127629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:40:51.789034  127629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:40:51.800086  127629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:40:51.811040  127629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 11:40:51.822108  127629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:40:51.833015  127629 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:40:51.843687  127629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:40:51.855390  127629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 11:40:51.865315  127629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 11:40:51.875101  127629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:40:52.021627  127629 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 11:40:56.357304  127629 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.335629559s)
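	For readability, the sed edits logged above leave the CRI-O drop-in roughly in the following state. This is a sketch only: the section names are assumed from the standard crio.conf layout, and only the keys touched by the commands above are shown.
	    # /etc/crio/crio.conf.d/02-crio.conf (relevant keys after the edits above)
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10"
	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]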
	I0819 11:40:56.357350  127629 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 11:40:56.357417  127629 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 11:40:56.362242  127629 start.go:563] Will wait 60s for crictl version
	I0819 11:40:56.362311  127629 ssh_runner.go:195] Run: which crictl
	I0819 11:40:56.366061  127629 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 11:40:56.404019  127629 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 11:40:56.404113  127629 ssh_runner.go:195] Run: crio --version
	I0819 11:40:56.433638  127629 ssh_runner.go:195] Run: crio --version
	I0819 11:40:56.463695  127629 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 11:40:56.465071  127629 main.go:141] libmachine: (ha-503856) Calling .GetIP
	I0819 11:40:56.467771  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:56.468167  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:40:56.468197  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:56.468380  127629 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 11:40:56.473075  127629 kubeadm.go:883] updating cluster {Name:ha-503856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 11:40:56.473226  127629 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 11:40:56.473280  127629 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 11:40:56.515674  127629 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 11:40:56.515697  127629 crio.go:433] Images already preloaded, skipping extraction
	I0819 11:40:56.515771  127629 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 11:40:56.549406  127629 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 11:40:56.549434  127629 cache_images.go:84] Images are preloaded, skipping loading
	I0819 11:40:56.549447  127629 kubeadm.go:934] updating node { 192.168.39.102 8443 v1.31.0 crio true true} ...
	I0819 11:40:56.549571  127629 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-503856 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 11:40:56.549663  127629 ssh_runner.go:195] Run: crio config
	I0819 11:40:56.599365  127629 cni.go:84] Creating CNI manager for ""
	I0819 11:40:56.599384  127629 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 11:40:56.599393  127629 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 11:40:56.599416  127629 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-503856 NodeName:ha-503856 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 11:40:56.599609  127629 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-503856"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 11:40:56.599631  127629 kube-vip.go:115] generating kube-vip config ...
	I0819 11:40:56.599674  127629 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 11:40:56.610772  127629 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 11:40:56.610903  127629 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
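	The manifest above has kube-vip claim the control-plane VIP 192.168.39.254 on eth0, with leader election across the control-plane nodes. A quick, illustrative way to confirm from the node that the elected leader holds the address (eth0 comes from vip_interface above; curl -k is used because the API server presents the cluster CA):
	    ip -4 addr show dev eth0 | grep 192.168.39.254
	    curl -ks https://192.168.39.254:8443/healthz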
	I0819 11:40:56.610966  127629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 11:40:56.620701  127629 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 11:40:56.620779  127629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 11:40:56.630324  127629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0819 11:40:56.647133  127629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 11:40:56.663873  127629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0819 11:40:56.680543  127629 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 11:40:56.698102  127629 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 11:40:56.702422  127629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:40:56.848366  127629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:40:56.863093  127629 certs.go:68] Setting up /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856 for IP: 192.168.39.102
	I0819 11:40:56.863115  127629 certs.go:194] generating shared ca certs ...
	I0819 11:40:56.863132  127629 certs.go:226] acquiring lock for ca certs: {Name:mkea0a5571fe2f86ea35780a8a0585cdcb5f186c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:40:56.863291  127629 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key
	I0819 11:40:56.863327  127629 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key
	I0819 11:40:56.863336  127629 certs.go:256] generating profile certs ...
	I0819 11:40:56.863403  127629 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.key
	I0819 11:40:56.863430  127629 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.755b656b
	I0819 11:40:56.863445  127629 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.755b656b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.183 192.168.39.122 192.168.39.254]
	I0819 11:40:56.942096  127629 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.755b656b ...
	I0819 11:40:56.942127  127629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.755b656b: {Name:mk7406fb59f8c51d1cb078d71f623a1983ecfb00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:40:56.942291  127629 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.755b656b ...
	I0819 11:40:56.942304  127629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.755b656b: {Name:mkab95e730345fbb832383ac7cee88f1454a2308 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:40:56.942374  127629 certs.go:381] copying /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.755b656b -> /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt
	I0819 11:40:56.942527  127629 certs.go:385] copying /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.755b656b -> /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key
	I0819 11:40:56.942663  127629 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key
	I0819 11:40:56.942683  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 11:40:56.942697  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 11:40:56.942708  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 11:40:56.942719  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 11:40:56.942732  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 11:40:56.942742  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 11:40:56.942762  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 11:40:56.942774  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 11:40:56.942823  127629 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem (1338 bytes)
	W0819 11:40:56.942852  127629 certs.go:480] ignoring /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632_empty.pem, impossibly tiny 0 bytes
	I0819 11:40:56.942861  127629 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 11:40:56.942884  127629 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem (1082 bytes)
	I0819 11:40:56.942906  127629 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem (1123 bytes)
	I0819 11:40:56.942929  127629 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem (1679 bytes)
	I0819 11:40:56.942965  127629 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 11:40:56.942991  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:40:56.943004  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem -> /usr/share/ca-certificates/106632.pem
	I0819 11:40:56.943016  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> /usr/share/ca-certificates/1066322.pem
	I0819 11:40:56.943588  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 11:40:56.968838  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 11:40:56.992873  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 11:40:57.017654  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 11:40:57.043055  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 11:40:57.073204  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 11:40:57.097849  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 11:40:57.142529  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 11:40:57.213341  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 11:40:57.264119  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem --> /usr/share/ca-certificates/106632.pem (1338 bytes)
	I0819 11:40:57.301287  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /usr/share/ca-certificates/1066322.pem (1708 bytes)
	I0819 11:40:57.338348  127629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 11:40:57.364080  127629 ssh_runner.go:195] Run: openssl version
	I0819 11:40:57.369909  127629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 11:40:57.382106  127629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:40:57.390351  127629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:40:57.390420  127629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:40:57.403675  127629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 11:40:57.426957  127629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106632.pem && ln -fs /usr/share/ca-certificates/106632.pem /etc/ssl/certs/106632.pem"
	I0819 11:40:57.450316  127629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106632.pem
	I0819 11:40:57.462052  127629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 11:26 /usr/share/ca-certificates/106632.pem
	I0819 11:40:57.462112  127629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106632.pem
	I0819 11:40:57.487389  127629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106632.pem /etc/ssl/certs/51391683.0"
	I0819 11:40:57.507133  127629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1066322.pem && ln -fs /usr/share/ca-certificates/1066322.pem /etc/ssl/certs/1066322.pem"
	I0819 11:40:57.539008  127629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1066322.pem
	I0819 11:40:57.545366  127629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 11:26 /usr/share/ca-certificates/1066322.pem
	I0819 11:40:57.545427  127629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1066322.pem
	I0819 11:40:57.554586  127629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1066322.pem /etc/ssl/certs/3ec20f2e.0"
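	The symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash filenames, which is why each certificate gets an "openssl x509 -hash -noout" run before it is linked into /etc/ssl/certs. A minimal illustration using the minikubeCA.pem case from this log (the hash shown is the one this log links for that cert):
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0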
	I0819 11:40:57.570725  127629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 11:40:57.576562  127629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 11:40:57.584050  127629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 11:40:57.591568  127629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 11:40:57.599644  127629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 11:40:57.609167  127629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 11:40:57.615684  127629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 11:40:57.622392  127629 kubeadm.go:392] StartCluster: {Name:ha-503856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:40:57.622526  127629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 11:40:57.622582  127629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 11:40:57.674980  127629 cri.go:89] found id: "234d581f1f24704042d96c4d8b3e01c914726f39a42283ba031a94abc64f2b3b"
	I0819 11:40:57.675008  127629 cri.go:89] found id: "bb00d7b27f13dda3e761726ad01d0c57671d29780260645aec6532f94a576f16"
	I0819 11:40:57.675012  127629 cri.go:89] found id: "cc7a981129a72e9a8516ad8f5935ff94bca370deb2b9406a0bd5d1d7b4f2adbc"
	I0819 11:40:57.675014  127629 cri.go:89] found id: "c6dba5fc1adfbc807731637b2922d432cd2b239352bf687cff7bee78b45d9342"
	I0819 11:40:57.675017  127629 cri.go:89] found id: "9d70a071997dc45b95134d59cd17221dc42d56b4b491ef282663f00bf9876fe1"
	I0819 11:40:57.675020  127629 cri.go:89] found id: "6c7867b6691ac22ff04851850c5c61a8f266622c1c39592596364c7fc6e99c39"
	I0819 11:40:57.675023  127629 cri.go:89] found id: "e67513ebd15d07c15fbfc61b2c17800f56b9db39a3b81085018122fe96a1f9de"
	I0819 11:40:57.675026  127629 cri.go:89] found id: "8315e448000809a9b1ec9c80658a16ee2c0abdfb6a00d9cf88e506833fc2e464"
	I0819 11:40:57.675028  127629 cri.go:89] found id: "1964134e9de80be5a82ee203eebe9f5718ba2a8b84e55479e47e81a74a259a50"
	I0819 11:40:57.675032  127629 cri.go:89] found id: "68730d308f1456cb2e025409c4d2edab9b17464c74f9beabfafc13246e8cc2b3"
	I0819 11:40:57.675047  127629 cri.go:89] found id: "11a47171a5438e39273ec80f4da3583d5a56af5d5de55c977d904283dc19b112"
	I0819 11:40:57.675049  127629 cri.go:89] found id: "ccea80d1a22a4536a82c166633220aa67d32633919751a2cf1da7f06c91bc674"
	I0819 11:40:57.675053  127629 cri.go:89] found id: "3879d2de39f1c9189c2e3eb16e843ddcbe448dd4be3f2b5ba5617abddfeb007a"
	I0819 11:40:57.675055  127629 cri.go:89] found id: "c0a1ce45d7b784d0b5b53838e68b5660b6da14d8a1966f23cc6949e4c31ea98e"
	I0819 11:40:57.675060  127629 cri.go:89] found id: "df01b4ed6011abdeb64cada43bcfa1f28ba45ef7e17f2991c3ec8a035d214f2e"
	I0819 11:40:57.675073  127629 cri.go:89] found id: ""
	I0819 11:40:57.675121  127629 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 19 11:43:58 ha-503856 crio[3547]: time="2024-08-19 11:43:58.996037946Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067838996003906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04f51254-af34-4015-88f2-448cd2fc8969 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:43:58 ha-503856 crio[3547]: time="2024-08-19 11:43:58.996578283Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=10d7f43f-3b4a-4c42-928d-f9bbe49bb100 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:43:58 ha-503856 crio[3547]: time="2024-08-19 11:43:58.996649025Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=10d7f43f-3b4a-4c42-928d-f9bbe49bb100 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:43:58 ha-503856 crio[3547]: time="2024-08-19 11:43:58.997028187Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa382afea1ef150e804db6767bc2ba83ea34772a4ba82ac8fcf6e82f909e789a,PodSandboxId:1b2812ce91f86e917f992cd073244caafedb88a7248b5086696b014ae2b5e617,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724067739891631252,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c212413-ac90-45fb-92de-bfd9e9115540,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21e59f85336459296bb85fc223b61e6ee435d0dfdac2c736a94f9c4d32df3033,PodSandboxId:03684d4d6f92449da28270637db323b003f7aeb271286608f70d309e80fa980e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724067704893586022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22b93ecec4cfd4767d930360e5939ae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c7e0e730267c99dc16ff00240be2f1ea3289554181af57aeb7d2da5ff7c80a4,PodSandboxId:2c1006e249933ddf7526b0366e69a2d4560f64b7c3642b0bef90ead779de6d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724067700894142300,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 147340613ea835c906d66a92a67bf8cf,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c49405a138a480ceb145b0c06fa0c19c7e9d8739b6e902e7728ff9edf8773ac,PodSandboxId:1b2812ce91f86e917f992cd073244caafedb88a7248b5086696b014ae2b5e617,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724067698892354341,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c212413-ac90-45fb-92de-bfd9e9115540,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14db04765cb31df5016a695bff0a72927c98c131587e975c013da68f3b4a36f1,PodSandboxId:7371637445eb2b64a8e1a3fe4f4176c338d5e58f62c46f68623a43115182d991,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724067697174958993,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7wpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2dd3c9-6ef2-4aaf-a44f-04fcd5b5ee2a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3bfc361f15572573d2773ebd096cf49964223a5aa1d402102ecc37ecfb1a14,PodSandboxId:c7bf4b267d73728fabdc475b8c3bda405ee0766ebf825ebee802b0fff6f94280,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724067677098549114,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a2609971bbd7c8401e4db81e3ea55d8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4410418eb581e358ab209bde84564e655ccd1d460399386fe5db165fc84bfc80,PodSandboxId:0461b7050ad929e00c6b4d08d2a1b22768d5b113605a89f03461fdd36b55fcd0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724067663642043955,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8054009-c06a-4ccc-b6c4-22e0f6bb632a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:85182c790c374aa6e461a0c8e3ed7dbd713c45e212f3abf2baa82b7db579dcbe,PodSandboxId:82794ac70c5b241d166513b9bc0cd6d94f8d4c39869df9c48ed62cbc0a955c04,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724067663844109398,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-st2mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e7c93b-40a9-4902-b1a5-5a6bcc55735c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d4ca83f
34581802d5c17fb457aa924254ed7a3a6c474aecf081a05b7b0d384,PodSandboxId:23eac961d8ef87891beee2753a152ae3b108e9854351da0e44d9a6e733bf348b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724067663691206920,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9aebe0e986b3c246dae4d64df3eee15,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e253084db077adff3d7dc910bd0fc
7c7d6eb0d8b1b91bcd1ebce47ff183cf7c,PodSandboxId:eb9d374c74b751a8c3b1af72dadb750590b5950d5abc0f2ef83c9dc2a955eb0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724067663581594688,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d885daa5ebbe527098639eca601027a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5dc24c345e023d2c1b39aeaa399932e0075a1d509e7efe0fc630b13b34930d,PodSandboxId:036
84d4d6f92449da28270637db323b003f7aeb271286608f70d309e80fa980e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724067663575899089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22b93ecec4cfd4767d930360e5939ae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db6351db80fdc3b8572db21488088e0c429be0a4387ee0793081531df343c543,PodSandboxI
d:2c1006e249933ddf7526b0366e69a2d4560f64b7c3642b0bef90ead779de6d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724067663488713880,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 147340613ea835c906d66a92a67bf8cf,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:234d581f1f24704042d96c4d8b3e01c914726f39a42283ba031a94abc64f2b3b,PodSandboxId:6d5134c5654f95d6f33a
686c2308569408d964927584f53b946162d8b76980cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067657399532799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2jdlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad8206ac-dc67-4fcb-a4ad-a431f3b0b7cd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb00d7b27f13dda3e761726ad01d0c57671d29780260645aec6532f94a576f16,PodSandboxId:9737bdaf9902ef57d23d291717c6b0ba89bce621b66977ea9dc7febc9b09e758,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067657381769049,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5dbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530828e-1061-434c-ad2f-80847f3ab171,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56a5ad9cc18e747407a6ebf193b0141d5d39e2506db7ff36e9f281ca31d07175,PodSandboxId:1191cb555eb550a2e7a62a8d5ad002f11d0ec47cb7997421e63fcfcdd619bb42,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724067158500933131,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7wpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2dd3c9-6ef2-4aaf-a44f-04fcd5b5ee2a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67513ebd15d07c15fbfc61b2c17800f56b9db39a3b81085018122fe96a1f9de,PodSandboxId:13c07aa9a002518609bd210554f3a19471e33ebfd9b816b1390235a4c81233e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724067024223183467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5dbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530828e-1061-434c-ad2f-80847f3ab171,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8315e448000809a9b1ec9c80658a16ee2c0abdfb6a00d9cf88e506833fc2e464,PodSandboxId:0b0b0a070f3ec813823103f1ea8a06af695e4510a42cd50f86da517d8da95ef1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724067024221425951,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-2jdlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad8206ac-dc67-4fcb-a4ad-a431f3b0b7cd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1964134e9de80be5a82ee203eebe9f5718ba2a8b84e55479e47e81a74a259a50,PodSandboxId:9079c84056e4b237e79326194bb75a2f176c9f8bf9556b3168d050890162ac6c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724067012620055458,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-st2mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e7c93b-40a9-4902-b1a5-5a6bcc55735c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68730d308f1456cb2e025409c4d2edab9b17464c74f9beabfafc13246e8cc2b3,PodSandboxId:adace0914115cd0a47335d629a4cfbb92f7251bc72bb0a436ae6daeab0d4192d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724067008316858973,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8054009-c06a-4ccc-b6c4-22e0f6bb632a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccea80d1a22a4536a82c166633220aa67d32633919751a2cf1da7f06c91bc674,PodSandboxId:982016c43ab0e7cfe37c1296b1e9abb139d5479ac81c6f267b9f0a41da9ab8a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724066997299442286,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9aebe0e986b3c246dae4d64df3eee15,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3879d2de39f1c9189c2e3eb16e843ddcbe448dd4be3f2b5ba5617abddfeb007a,PodSandboxId:eb7c9eb1ba04206c376798a19b430446dfef69f4a421e9d862de5c7e5f687408,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724066997297013428,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d885daa5ebbe527098639eca601027a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=10d7f43f-3b4a-4c42-928d-f9bbe49bb100 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:43:59 ha-503856 crio[3547]: time="2024-08-19 11:43:59.045402580Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c51925ad-6ded-47c0-8e89-5d348e49664b name=/runtime.v1.RuntimeService/Version
	Aug 19 11:43:59 ha-503856 crio[3547]: time="2024-08-19 11:43:59.045481118Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c51925ad-6ded-47c0-8e89-5d348e49664b name=/runtime.v1.RuntimeService/Version
	Aug 19 11:43:59 ha-503856 crio[3547]: time="2024-08-19 11:43:59.046959772Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=356f0de1-c379-40f1-9017-6080a3c93015 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:43:59 ha-503856 crio[3547]: time="2024-08-19 11:43:59.047618592Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067839047594567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=356f0de1-c379-40f1-9017-6080a3c93015 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:43:59 ha-503856 crio[3547]: time="2024-08-19 11:43:59.048342413Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cdbf4a54-75f8-40a4-b1da-eb7982734579 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:43:59 ha-503856 crio[3547]: time="2024-08-19 11:43:59.048399113Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cdbf4a54-75f8-40a4-b1da-eb7982734579 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:43:59 ha-503856 crio[3547]: time="2024-08-19 11:43:59.048985856Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa382afea1ef150e804db6767bc2ba83ea34772a4ba82ac8fcf6e82f909e789a,PodSandboxId:1b2812ce91f86e917f992cd073244caafedb88a7248b5086696b014ae2b5e617,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724067739891631252,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c212413-ac90-45fb-92de-bfd9e9115540,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21e59f85336459296bb85fc223b61e6ee435d0dfdac2c736a94f9c4d32df3033,PodSandboxId:03684d4d6f92449da28270637db323b003f7aeb271286608f70d309e80fa980e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724067704893586022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22b93ecec4cfd4767d930360e5939ae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c7e0e730267c99dc16ff00240be2f1ea3289554181af57aeb7d2da5ff7c80a4,PodSandboxId:2c1006e249933ddf7526b0366e69a2d4560f64b7c3642b0bef90ead779de6d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724067700894142300,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 147340613ea835c906d66a92a67bf8cf,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c49405a138a480ceb145b0c06fa0c19c7e9d8739b6e902e7728ff9edf8773ac,PodSandboxId:1b2812ce91f86e917f992cd073244caafedb88a7248b5086696b014ae2b5e617,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724067698892354341,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c212413-ac90-45fb-92de-bfd9e9115540,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14db04765cb31df5016a695bff0a72927c98c131587e975c013da68f3b4a36f1,PodSandboxId:7371637445eb2b64a8e1a3fe4f4176c338d5e58f62c46f68623a43115182d991,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724067697174958993,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7wpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2dd3c9-6ef2-4aaf-a44f-04fcd5b5ee2a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3bfc361f15572573d2773ebd096cf49964223a5aa1d402102ecc37ecfb1a14,PodSandboxId:c7bf4b267d73728fabdc475b8c3bda405ee0766ebf825ebee802b0fff6f94280,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724067677098549114,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a2609971bbd7c8401e4db81e3ea55d8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4410418eb581e358ab209bde84564e655ccd1d460399386fe5db165fc84bfc80,PodSandboxId:0461b7050ad929e00c6b4d08d2a1b22768d5b113605a89f03461fdd36b55fcd0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724067663642043955,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8054009-c06a-4ccc-b6c4-22e0f6bb632a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:85182c790c374aa6e461a0c8e3ed7dbd713c45e212f3abf2baa82b7db579dcbe,PodSandboxId:82794ac70c5b241d166513b9bc0cd6d94f8d4c39869df9c48ed62cbc0a955c04,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724067663844109398,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-st2mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e7c93b-40a9-4902-b1a5-5a6bcc55735c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d4ca83f
34581802d5c17fb457aa924254ed7a3a6c474aecf081a05b7b0d384,PodSandboxId:23eac961d8ef87891beee2753a152ae3b108e9854351da0e44d9a6e733bf348b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724067663691206920,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9aebe0e986b3c246dae4d64df3eee15,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e253084db077adff3d7dc910bd0fc
7c7d6eb0d8b1b91bcd1ebce47ff183cf7c,PodSandboxId:eb9d374c74b751a8c3b1af72dadb750590b5950d5abc0f2ef83c9dc2a955eb0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724067663581594688,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d885daa5ebbe527098639eca601027a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5dc24c345e023d2c1b39aeaa399932e0075a1d509e7efe0fc630b13b34930d,PodSandboxId:036
84d4d6f92449da28270637db323b003f7aeb271286608f70d309e80fa980e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724067663575899089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22b93ecec4cfd4767d930360e5939ae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db6351db80fdc3b8572db21488088e0c429be0a4387ee0793081531df343c543,PodSandboxI
d:2c1006e249933ddf7526b0366e69a2d4560f64b7c3642b0bef90ead779de6d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724067663488713880,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 147340613ea835c906d66a92a67bf8cf,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:234d581f1f24704042d96c4d8b3e01c914726f39a42283ba031a94abc64f2b3b,PodSandboxId:6d5134c5654f95d6f33a
686c2308569408d964927584f53b946162d8b76980cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067657399532799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2jdlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad8206ac-dc67-4fcb-a4ad-a431f3b0b7cd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb00d7b27f13dda3e761726ad01d0c57671d29780260645aec6532f94a576f16,PodSandboxId:9737bdaf9902ef57d23d291717c6b0ba89bce621b66977ea9dc7febc9b09e758,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067657381769049,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5dbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530828e-1061-434c-ad2f-80847f3ab171,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56a5ad9cc18e747407a6ebf193b0141d5d39e2506db7ff36e9f281ca31d07175,PodSandboxId:1191cb555eb550a2e7a62a8d5ad002f11d0ec47cb7997421e63fcfcdd619bb42,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724067158500933131,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7wpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2dd3c9-6ef2-4aaf-a44f-04fcd5b5ee2a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67513ebd15d07c15fbfc61b2c17800f56b9db39a3b81085018122fe96a1f9de,PodSandboxId:13c07aa9a002518609bd210554f3a19471e33ebfd9b816b1390235a4c81233e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724067024223183467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5dbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530828e-1061-434c-ad2f-80847f3ab171,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8315e448000809a9b1ec9c80658a16ee2c0abdfb6a00d9cf88e506833fc2e464,PodSandboxId:0b0b0a070f3ec813823103f1ea8a06af695e4510a42cd50f86da517d8da95ef1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724067024221425951,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-2jdlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad8206ac-dc67-4fcb-a4ad-a431f3b0b7cd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1964134e9de80be5a82ee203eebe9f5718ba2a8b84e55479e47e81a74a259a50,PodSandboxId:9079c84056e4b237e79326194bb75a2f176c9f8bf9556b3168d050890162ac6c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724067012620055458,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-st2mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e7c93b-40a9-4902-b1a5-5a6bcc55735c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68730d308f1456cb2e025409c4d2edab9b17464c74f9beabfafc13246e8cc2b3,PodSandboxId:adace0914115cd0a47335d629a4cfbb92f7251bc72bb0a436ae6daeab0d4192d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724067008316858973,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8054009-c06a-4ccc-b6c4-22e0f6bb632a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccea80d1a22a4536a82c166633220aa67d32633919751a2cf1da7f06c91bc674,PodSandboxId:982016c43ab0e7cfe37c1296b1e9abb139d5479ac81c6f267b9f0a41da9ab8a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724066997299442286,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9aebe0e986b3c246dae4d64df3eee15,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3879d2de39f1c9189c2e3eb16e843ddcbe448dd4be3f2b5ba5617abddfeb007a,PodSandboxId:eb7c9eb1ba04206c376798a19b430446dfef69f4a421e9d862de5c7e5f687408,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724066997297013428,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d885daa5ebbe527098639eca601027a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cdbf4a54-75f8-40a4-b1da-eb7982734579 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:43:59 ha-503856 crio[3547]: time="2024-08-19 11:43:59.093178746Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2600f7af-eb41-4901-becb-25e837cc44b0 name=/runtime.v1.RuntimeService/Version
	Aug 19 11:43:59 ha-503856 crio[3547]: time="2024-08-19 11:43:59.093262634Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2600f7af-eb41-4901-becb-25e837cc44b0 name=/runtime.v1.RuntimeService/Version
	Aug 19 11:43:59 ha-503856 crio[3547]: time="2024-08-19 11:43:59.094859411Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24d23256-0f09-4339-868c-4e5b09ca8f84 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:43:59 ha-503856 crio[3547]: time="2024-08-19 11:43:59.095375007Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067839095348076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24d23256-0f09-4339-868c-4e5b09ca8f84 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:43:59 ha-503856 crio[3547]: time="2024-08-19 11:43:59.096046665Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e0397e0-c91d-43cf-86db-df21b979f656 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:43:59 ha-503856 crio[3547]: time="2024-08-19 11:43:59.096147695Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e0397e0-c91d-43cf-86db-df21b979f656 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:43:59 ha-503856 crio[3547]: time="2024-08-19 11:43:59.096559186Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa382afea1ef150e804db6767bc2ba83ea34772a4ba82ac8fcf6e82f909e789a,PodSandboxId:1b2812ce91f86e917f992cd073244caafedb88a7248b5086696b014ae2b5e617,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724067739891631252,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c212413-ac90-45fb-92de-bfd9e9115540,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21e59f85336459296bb85fc223b61e6ee435d0dfdac2c736a94f9c4d32df3033,PodSandboxId:03684d4d6f92449da28270637db323b003f7aeb271286608f70d309e80fa980e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724067704893586022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22b93ecec4cfd4767d930360e5939ae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c7e0e730267c99dc16ff00240be2f1ea3289554181af57aeb7d2da5ff7c80a4,PodSandboxId:2c1006e249933ddf7526b0366e69a2d4560f64b7c3642b0bef90ead779de6d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724067700894142300,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 147340613ea835c906d66a92a67bf8cf,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c49405a138a480ceb145b0c06fa0c19c7e9d8739b6e902e7728ff9edf8773ac,PodSandboxId:1b2812ce91f86e917f992cd073244caafedb88a7248b5086696b014ae2b5e617,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724067698892354341,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c212413-ac90-45fb-92de-bfd9e9115540,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14db04765cb31df5016a695bff0a72927c98c131587e975c013da68f3b4a36f1,PodSandboxId:7371637445eb2b64a8e1a3fe4f4176c338d5e58f62c46f68623a43115182d991,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724067697174958993,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7wpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2dd3c9-6ef2-4aaf-a44f-04fcd5b5ee2a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3bfc361f15572573d2773ebd096cf49964223a5aa1d402102ecc37ecfb1a14,PodSandboxId:c7bf4b267d73728fabdc475b8c3bda405ee0766ebf825ebee802b0fff6f94280,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724067677098549114,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a2609971bbd7c8401e4db81e3ea55d8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4410418eb581e358ab209bde84564e655ccd1d460399386fe5db165fc84bfc80,PodSandboxId:0461b7050ad929e00c6b4d08d2a1b22768d5b113605a89f03461fdd36b55fcd0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724067663642043955,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8054009-c06a-4ccc-b6c4-22e0f6bb632a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:85182c790c374aa6e461a0c8e3ed7dbd713c45e212f3abf2baa82b7db579dcbe,PodSandboxId:82794ac70c5b241d166513b9bc0cd6d94f8d4c39869df9c48ed62cbc0a955c04,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724067663844109398,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-st2mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e7c93b-40a9-4902-b1a5-5a6bcc55735c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d4ca83f
34581802d5c17fb457aa924254ed7a3a6c474aecf081a05b7b0d384,PodSandboxId:23eac961d8ef87891beee2753a152ae3b108e9854351da0e44d9a6e733bf348b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724067663691206920,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9aebe0e986b3c246dae4d64df3eee15,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e253084db077adff3d7dc910bd0fc
7c7d6eb0d8b1b91bcd1ebce47ff183cf7c,PodSandboxId:eb9d374c74b751a8c3b1af72dadb750590b5950d5abc0f2ef83c9dc2a955eb0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724067663581594688,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d885daa5ebbe527098639eca601027a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5dc24c345e023d2c1b39aeaa399932e0075a1d509e7efe0fc630b13b34930d,PodSandboxId:036
84d4d6f92449da28270637db323b003f7aeb271286608f70d309e80fa980e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724067663575899089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22b93ecec4cfd4767d930360e5939ae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db6351db80fdc3b8572db21488088e0c429be0a4387ee0793081531df343c543,PodSandboxI
d:2c1006e249933ddf7526b0366e69a2d4560f64b7c3642b0bef90ead779de6d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724067663488713880,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 147340613ea835c906d66a92a67bf8cf,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:234d581f1f24704042d96c4d8b3e01c914726f39a42283ba031a94abc64f2b3b,PodSandboxId:6d5134c5654f95d6f33a
686c2308569408d964927584f53b946162d8b76980cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067657399532799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2jdlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad8206ac-dc67-4fcb-a4ad-a431f3b0b7cd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb00d7b27f13dda3e761726ad01d0c57671d29780260645aec6532f94a576f16,PodSandboxId:9737bdaf9902ef57d23d291717c6b0ba89bce621b66977ea9dc7febc9b09e758,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067657381769049,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5dbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530828e-1061-434c-ad2f-80847f3ab171,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56a5ad9cc18e747407a6ebf193b0141d5d39e2506db7ff36e9f281ca31d07175,PodSandboxId:1191cb555eb550a2e7a62a8d5ad002f11d0ec47cb7997421e63fcfcdd619bb42,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724067158500933131,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7wpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2dd3c9-6ef2-4aaf-a44f-04fcd5b5ee2a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67513ebd15d07c15fbfc61b2c17800f56b9db39a3b81085018122fe96a1f9de,PodSandboxId:13c07aa9a002518609bd210554f3a19471e33ebfd9b816b1390235a4c81233e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724067024223183467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5dbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530828e-1061-434c-ad2f-80847f3ab171,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8315e448000809a9b1ec9c80658a16ee2c0abdfb6a00d9cf88e506833fc2e464,PodSandboxId:0b0b0a070f3ec813823103f1ea8a06af695e4510a42cd50f86da517d8da95ef1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724067024221425951,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-2jdlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad8206ac-dc67-4fcb-a4ad-a431f3b0b7cd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1964134e9de80be5a82ee203eebe9f5718ba2a8b84e55479e47e81a74a259a50,PodSandboxId:9079c84056e4b237e79326194bb75a2f176c9f8bf9556b3168d050890162ac6c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724067012620055458,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-st2mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e7c93b-40a9-4902-b1a5-5a6bcc55735c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68730d308f1456cb2e025409c4d2edab9b17464c74f9beabfafc13246e8cc2b3,PodSandboxId:adace0914115cd0a47335d629a4cfbb92f7251bc72bb0a436ae6daeab0d4192d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724067008316858973,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8054009-c06a-4ccc-b6c4-22e0f6bb632a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccea80d1a22a4536a82c166633220aa67d32633919751a2cf1da7f06c91bc674,PodSandboxId:982016c43ab0e7cfe37c1296b1e9abb139d5479ac81c6f267b9f0a41da9ab8a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724066997299442286,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9aebe0e986b3c246dae4d64df3eee15,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3879d2de39f1c9189c2e3eb16e843ddcbe448dd4be3f2b5ba5617abddfeb007a,PodSandboxId:eb7c9eb1ba04206c376798a19b430446dfef69f4a421e9d862de5c7e5f687408,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724066997297013428,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d885daa5ebbe527098639eca601027a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e0397e0-c91d-43cf-86db-df21b979f656 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:43:59 ha-503856 crio[3547]: time="2024-08-19 11:43:59.138670010Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e04252ac-2a9d-4d42-bdbd-ca337e892922 name=/runtime.v1.RuntimeService/Version
	Aug 19 11:43:59 ha-503856 crio[3547]: time="2024-08-19 11:43:59.138741766Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e04252ac-2a9d-4d42-bdbd-ca337e892922 name=/runtime.v1.RuntimeService/Version
	Aug 19 11:43:59 ha-503856 crio[3547]: time="2024-08-19 11:43:59.140025474Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d42e70ad-c950-4b72-9916-9d08b6f6c850 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:43:59 ha-503856 crio[3547]: time="2024-08-19 11:43:59.140527626Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067839140501855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d42e70ad-c950-4b72-9916-9d08b6f6c850 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:43:59 ha-503856 crio[3547]: time="2024-08-19 11:43:59.141250272Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5edfa22e-3169-4dbc-a46b-30bd7f0f9f0c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:43:59 ha-503856 crio[3547]: time="2024-08-19 11:43:59.141310672Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5edfa22e-3169-4dbc-a46b-30bd7f0f9f0c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:43:59 ha-503856 crio[3547]: time="2024-08-19 11:43:59.144267980Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa382afea1ef150e804db6767bc2ba83ea34772a4ba82ac8fcf6e82f909e789a,PodSandboxId:1b2812ce91f86e917f992cd073244caafedb88a7248b5086696b014ae2b5e617,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724067739891631252,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c212413-ac90-45fb-92de-bfd9e9115540,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21e59f85336459296bb85fc223b61e6ee435d0dfdac2c736a94f9c4d32df3033,PodSandboxId:03684d4d6f92449da28270637db323b003f7aeb271286608f70d309e80fa980e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724067704893586022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22b93ecec4cfd4767d930360e5939ae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c7e0e730267c99dc16ff00240be2f1ea3289554181af57aeb7d2da5ff7c80a4,PodSandboxId:2c1006e249933ddf7526b0366e69a2d4560f64b7c3642b0bef90ead779de6d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724067700894142300,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 147340613ea835c906d66a92a67bf8cf,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c49405a138a480ceb145b0c06fa0c19c7e9d8739b6e902e7728ff9edf8773ac,PodSandboxId:1b2812ce91f86e917f992cd073244caafedb88a7248b5086696b014ae2b5e617,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724067698892354341,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c212413-ac90-45fb-92de-bfd9e9115540,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14db04765cb31df5016a695bff0a72927c98c131587e975c013da68f3b4a36f1,PodSandboxId:7371637445eb2b64a8e1a3fe4f4176c338d5e58f62c46f68623a43115182d991,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724067697174958993,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7wpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2dd3c9-6ef2-4aaf-a44f-04fcd5b5ee2a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3bfc361f15572573d2773ebd096cf49964223a5aa1d402102ecc37ecfb1a14,PodSandboxId:c7bf4b267d73728fabdc475b8c3bda405ee0766ebf825ebee802b0fff6f94280,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724067677098549114,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a2609971bbd7c8401e4db81e3ea55d8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4410418eb581e358ab209bde84564e655ccd1d460399386fe5db165fc84bfc80,PodSandboxId:0461b7050ad929e00c6b4d08d2a1b22768d5b113605a89f03461fdd36b55fcd0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724067663642043955,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8054009-c06a-4ccc-b6c4-22e0f6bb632a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:85182c790c374aa6e461a0c8e3ed7dbd713c45e212f3abf2baa82b7db579dcbe,PodSandboxId:82794ac70c5b241d166513b9bc0cd6d94f8d4c39869df9c48ed62cbc0a955c04,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724067663844109398,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-st2mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e7c93b-40a9-4902-b1a5-5a6bcc55735c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d4ca83f
34581802d5c17fb457aa924254ed7a3a6c474aecf081a05b7b0d384,PodSandboxId:23eac961d8ef87891beee2753a152ae3b108e9854351da0e44d9a6e733bf348b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724067663691206920,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9aebe0e986b3c246dae4d64df3eee15,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e253084db077adff3d7dc910bd0fc
7c7d6eb0d8b1b91bcd1ebce47ff183cf7c,PodSandboxId:eb9d374c74b751a8c3b1af72dadb750590b5950d5abc0f2ef83c9dc2a955eb0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724067663581594688,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d885daa5ebbe527098639eca601027a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5dc24c345e023d2c1b39aeaa399932e0075a1d509e7efe0fc630b13b34930d,PodSandboxId:036
84d4d6f92449da28270637db323b003f7aeb271286608f70d309e80fa980e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724067663575899089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22b93ecec4cfd4767d930360e5939ae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db6351db80fdc3b8572db21488088e0c429be0a4387ee0793081531df343c543,PodSandboxI
d:2c1006e249933ddf7526b0366e69a2d4560f64b7c3642b0bef90ead779de6d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724067663488713880,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 147340613ea835c906d66a92a67bf8cf,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:234d581f1f24704042d96c4d8b3e01c914726f39a42283ba031a94abc64f2b3b,PodSandboxId:6d5134c5654f95d6f33a
686c2308569408d964927584f53b946162d8b76980cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067657399532799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2jdlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad8206ac-dc67-4fcb-a4ad-a431f3b0b7cd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb00d7b27f13dda3e761726ad01d0c57671d29780260645aec6532f94a576f16,PodSandboxId:9737bdaf9902ef57d23d291717c6b0ba89bce621b66977ea9dc7febc9b09e758,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067657381769049,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5dbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530828e-1061-434c-ad2f-80847f3ab171,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56a5ad9cc18e747407a6ebf193b0141d5d39e2506db7ff36e9f281ca31d07175,PodSandboxId:1191cb555eb550a2e7a62a8d5ad002f11d0ec47cb7997421e63fcfcdd619bb42,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724067158500933131,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7wpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2dd3c9-6ef2-4aaf-a44f-04fcd5b5ee2a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67513ebd15d07c15fbfc61b2c17800f56b9db39a3b81085018122fe96a1f9de,PodSandboxId:13c07aa9a002518609bd210554f3a19471e33ebfd9b816b1390235a4c81233e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724067024223183467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5dbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530828e-1061-434c-ad2f-80847f3ab171,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8315e448000809a9b1ec9c80658a16ee2c0abdfb6a00d9cf88e506833fc2e464,PodSandboxId:0b0b0a070f3ec813823103f1ea8a06af695e4510a42cd50f86da517d8da95ef1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724067024221425951,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-2jdlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad8206ac-dc67-4fcb-a4ad-a431f3b0b7cd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1964134e9de80be5a82ee203eebe9f5718ba2a8b84e55479e47e81a74a259a50,PodSandboxId:9079c84056e4b237e79326194bb75a2f176c9f8bf9556b3168d050890162ac6c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724067012620055458,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-st2mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e7c93b-40a9-4902-b1a5-5a6bcc55735c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68730d308f1456cb2e025409c4d2edab9b17464c74f9beabfafc13246e8cc2b3,PodSandboxId:adace0914115cd0a47335d629a4cfbb92f7251bc72bb0a436ae6daeab0d4192d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724067008316858973,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8054009-c06a-4ccc-b6c4-22e0f6bb632a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccea80d1a22a4536a82c166633220aa67d32633919751a2cf1da7f06c91bc674,PodSandboxId:982016c43ab0e7cfe37c1296b1e9abb139d5479ac81c6f267b9f0a41da9ab8a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724066997299442286,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9aebe0e986b3c246dae4d64df3eee15,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3879d2de39f1c9189c2e3eb16e843ddcbe448dd4be3f2b5ba5617abddfeb007a,PodSandboxId:eb7c9eb1ba04206c376798a19b430446dfef69f4a421e9d862de5c7e5f687408,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724066997297013428,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d885daa5ebbe527098639eca601027a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5edfa22e-3169-4dbc-a46b-30bd7f0f9f0c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	fa382afea1ef1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   1b2812ce91f86       storage-provisioner
	21e59f8533645       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      2 minutes ago        Running             kube-controller-manager   2                   03684d4d6f924       kube-controller-manager-ha-503856
	4c7e0e730267c       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago        Running             kube-apiserver            3                   2c1006e249933       kube-apiserver-ha-503856
	9c49405a138a4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   1b2812ce91f86       storage-provisioner
	14db04765cb31       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   7371637445eb2       busybox-7dff88458-7wpbx
	0f3bfc361f155       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   c7bf4b267d737       kube-vip-ha-503856
	85182c790c374       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   82794ac70c5b2       kindnet-st2mx
	b4d4ca83f3458       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      2 minutes ago        Running             kube-scheduler            1                   23eac961d8ef8       kube-scheduler-ha-503856
	4410418eb581e       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      2 minutes ago        Running             kube-proxy                1                   0461b7050ad92       kube-proxy-d6zw9
	9e253084db077       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   eb9d374c74b75       etcd-ha-503856
	bb5dc24c345e0       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      2 minutes ago        Exited              kube-controller-manager   1                   03684d4d6f924       kube-controller-manager-ha-503856
	db6351db80fdc       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago        Exited              kube-apiserver            2                   2c1006e249933       kube-apiserver-ha-503856
	234d581f1f247       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   6d5134c5654f9       coredns-6f6b679f8f-2jdlw
	bb00d7b27f13d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   9737bdaf9902e       coredns-6f6b679f8f-5dbrz
	56a5ad9cc18e7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   1191cb555eb55       busybox-7dff88458-7wpbx
	e67513ebd15d0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   13c07aa9a0025       coredns-6f6b679f8f-5dbrz
	8315e44800080       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   0b0b0a070f3ec       coredns-6f6b679f8f-2jdlw
	1964134e9de80       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    13 minutes ago       Exited              kindnet-cni               0                   9079c84056e4b       kindnet-st2mx
	68730d308f145       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago       Exited              kube-proxy                0                   adace0914115c       kube-proxy-d6zw9
	ccea80d1a22a4       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      14 minutes ago       Exited              kube-scheduler            0                   982016c43ab0e       kube-scheduler-ha-503856
	3879d2de39f1c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      14 minutes ago       Exited              etcd                      0                   eb7c9eb1ba042       etcd-ha-503856
	
	
	==> coredns [234d581f1f24704042d96c4d8b3e01c914726f39a42283ba031a94abc64f2b3b] <==
	Trace[835521008]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:41:16.415)
	Trace[835521008]: [10.001471487s] [10.001471487s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1606563320]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 11:41:09.196) (total time: 10001ms):
	Trace[1606563320]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:41:19.198)
	Trace[1606563320]: [10.001554405s] [10.001554405s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:46290->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:46290->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [8315e448000809a9b1ec9c80658a16ee2c0abdfb6a00d9cf88e506833fc2e464] <==
	[INFO] 10.244.3.2:59991 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001295677s
	[INFO] 10.244.3.2:36199 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168276s
	[INFO] 10.244.3.2:56390 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118777s
	[INFO] 10.244.3.2:60188 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110134s
	[INFO] 10.244.1.2:48283 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110043s
	[INFO] 10.244.1.2:47868 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001551069s
	[INFO] 10.244.1.2:40080 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132463s
	[INFO] 10.244.1.2:39365 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001154088s
	[INFO] 10.244.1.2:42435 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074226s
	[INFO] 10.244.0.4:41562 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076296s
	[INFO] 10.244.0.4:56190 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067218s
	[INFO] 10.244.3.2:36444 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119378s
	[INFO] 10.244.3.2:38880 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000151765s
	[INFO] 10.244.1.2:43281 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016005s
	[INFO] 10.244.1.2:44768 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098293s
	[INFO] 10.244.0.4:42211 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000129163s
	[INFO] 10.244.0.4:53178 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082891s
	[INFO] 10.244.3.2:39486 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118564s
	[INFO] 10.244.3.2:46262 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112723s
	[INFO] 10.244.3.2:50068 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106233s
	[INFO] 10.244.1.2:43781 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134028s
	[INFO] 10.244.1.2:47607 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000071487s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1857&timeout=6m44s&timeoutSeconds=404&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [bb00d7b27f13dda3e761726ad01d0c57671d29780260645aec6532f94a576f16] <==
	Trace[1639001552]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:41:16.655)
	Trace[1639001552]: [10.001488144s] [10.001488144s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40460->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40460->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e67513ebd15d07c15fbfc61b2c17800f56b9db39a3b81085018122fe96a1f9de] <==
	[INFO] 10.244.1.2:52489 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001893734s
	[INFO] 10.244.0.4:58770 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111417s
	[INFO] 10.244.0.4:32786 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159712s
	[INFO] 10.244.0.4:34773 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133937s
	[INFO] 10.244.0.4:34211 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003320974s
	[INFO] 10.244.0.4:44413 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105874s
	[INFO] 10.244.0.4:37795 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067103s
	[INFO] 10.244.3.2:48365 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108129s
	[INFO] 10.244.3.2:35563 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101277s
	[INFO] 10.244.1.2:41209 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111152s
	[INFO] 10.244.1.2:59241 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000195927s
	[INFO] 10.244.1.2:32916 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097287s
	[INFO] 10.244.0.4:53548 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104877s
	[INFO] 10.244.0.4:55650 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105726s
	[INFO] 10.244.3.2:40741 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204087s
	[INFO] 10.244.3.2:41373 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105987s
	[INFO] 10.244.1.2:57537 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000193166s
	[INFO] 10.244.1.2:40497 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080869s
	[INFO] 10.244.0.4:33281 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136165s
	[INFO] 10.244.0.4:49164 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000302537s
	[INFO] 10.244.3.2:54372 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000157216s
	[INFO] 10.244.1.2:40968 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000206142s
	[INFO] 10.244.1.2:54797 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000102712s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-503856
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-503856
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=ha-503856
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T11_30_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 11:30:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-503856
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 11:43:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 11:41:45 +0000   Mon, 19 Aug 2024 11:30:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 11:41:45 +0000   Mon, 19 Aug 2024 11:30:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 11:41:45 +0000   Mon, 19 Aug 2024 11:30:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 11:41:45 +0000   Mon, 19 Aug 2024 11:30:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-503856
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ebf7fa993760403a8b3080e5ea2bdf31
	  System UUID:                ebf7fa99-3760-403a-8b30-80e5ea2bdf31
	  Boot ID:                    f3b2611c-5dfd-45ef-8747-94b35364374b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7wpbx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-2jdlw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-6f6b679f8f-5dbrz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-503856                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-st2mx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-503856             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-503856    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-d6zw9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-503856             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-503856                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 2m11s                 kube-proxy       
	  Normal   Starting                 13m                   kube-proxy       
	  Normal   Starting                 13m                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                   kubelet          Node ha-503856 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                   kubelet          Node ha-503856 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                   kubelet          Node ha-503856 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                   node-controller  Node ha-503856 event: Registered Node ha-503856 in Controller
	  Normal   NodeReady                13m                   kubelet          Node ha-503856 status is now: NodeReady
	  Normal   RegisteredNode           12m                   node-controller  Node ha-503856 event: Registered Node ha-503856 in Controller
	  Normal   RegisteredNode           11m                   node-controller  Node ha-503856 event: Registered Node ha-503856 in Controller
	  Warning  ContainerGCFailed        3m54s                 kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             3m15s (x3 over 4m4s)  kubelet          Node ha-503856 status is now: NodeNotReady
	  Normal   RegisteredNode           2m15s                 node-controller  Node ha-503856 event: Registered Node ha-503856 in Controller
	  Normal   RegisteredNode           2m12s                 node-controller  Node ha-503856 event: Registered Node ha-503856 in Controller
	  Normal   RegisteredNode           37s                   node-controller  Node ha-503856 event: Registered Node ha-503856 in Controller
	
	
	Name:               ha-503856-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-503856-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=ha-503856
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T11_30_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 11:30:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-503856-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 11:43:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 11:42:29 +0000   Mon, 19 Aug 2024 11:41:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 11:42:29 +0000   Mon, 19 Aug 2024 11:41:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 11:42:29 +0000   Mon, 19 Aug 2024 11:41:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 11:42:29 +0000   Mon, 19 Aug 2024 11:41:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    ha-503856-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9a5c9c65d0cb479397609eb1cad01b44
	  System UUID:                9a5c9c65-d0cb-4793-9760-9eb1cad01b44
	  Boot ID:                    61f3472a-e7db-4faa-8ee3-b445e7a5d07f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nxhq6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-503856-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-rnjwj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-503856-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-503856-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-j2f6h                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-503856-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-503856-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  Starting                 2m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-503856-m02 status is now: NodeHasSufficientMemory
	  Normal  CIDRAssignmentFailed     13m                    cidrAllocator    Node ha-503856-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-503856-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-503856-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                    node-controller  Node ha-503856-m02 event: Registered Node ha-503856-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-503856-m02 event: Registered Node ha-503856-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-503856-m02 event: Registered Node ha-503856-m02 in Controller
	  Normal  NodeNotReady             9m29s                  node-controller  Node ha-503856-m02 status is now: NodeNotReady
	  Normal  Starting                 2m39s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m38s (x8 over 2m39s)  kubelet          Node ha-503856-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m38s (x8 over 2m39s)  kubelet          Node ha-503856-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m38s (x7 over 2m39s)  kubelet          Node ha-503856-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m15s                  node-controller  Node ha-503856-m02 event: Registered Node ha-503856-m02 in Controller
	  Normal  RegisteredNode           2m12s                  node-controller  Node ha-503856-m02 event: Registered Node ha-503856-m02 in Controller
	  Normal  RegisteredNode           37s                    node-controller  Node ha-503856-m02 event: Registered Node ha-503856-m02 in Controller
	
	
	Name:               ha-503856-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-503856-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=ha-503856
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T11_32_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 11:32:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-503856-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 11:43:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 11:43:38 +0000   Mon, 19 Aug 2024 11:43:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 11:43:38 +0000   Mon, 19 Aug 2024 11:43:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 11:43:38 +0000   Mon, 19 Aug 2024 11:43:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 11:43:38 +0000   Mon, 19 Aug 2024 11:43:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.122
	  Hostname:    ha-503856-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d357d9a38d274836bfe734b86d4bde83
	  System UUID:                d357d9a3-8d27-4836-bfe7-34b86d4bde83
	  Boot ID:                    867bef32-abf5-4bf5-893d-d8a927ee6880
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nbmlj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-503856-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-hvszk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-503856-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-503856-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-8xzr9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-503856-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-503856-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 33s                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   CIDRAssignmentFailed     11m                cidrAllocator    Node ha-503856-m03 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-503856-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-503856-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-503856-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-503856-m03 event: Registered Node ha-503856-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-503856-m03 event: Registered Node ha-503856-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-503856-m03 event: Registered Node ha-503856-m03 in Controller
	  Normal   RegisteredNode           2m15s              node-controller  Node ha-503856-m03 event: Registered Node ha-503856-m03 in Controller
	  Normal   RegisteredNode           2m12s              node-controller  Node ha-503856-m03 event: Registered Node ha-503856-m03 in Controller
	  Normal   NodeNotReady             94s                node-controller  Node ha-503856-m03 status is now: NodeNotReady
	  Normal   Starting                 52s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  52s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  52s (x2 over 52s)  kubelet          Node ha-503856-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    52s (x2 over 52s)  kubelet          Node ha-503856-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     52s (x2 over 52s)  kubelet          Node ha-503856-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 52s                kubelet          Node ha-503856-m03 has been rebooted, boot id: 867bef32-abf5-4bf5-893d-d8a927ee6880
	  Normal   NodeReady                52s                kubelet          Node ha-503856-m03 status is now: NodeReady
	  Normal   RegisteredNode           37s                node-controller  Node ha-503856-m03 event: Registered Node ha-503856-m03 in Controller
	
	
	Name:               ha-503856-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-503856-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=ha-503856
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T11_33_11_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 11:33:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-503856-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 11:43:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 11:43:50 +0000   Mon, 19 Aug 2024 11:43:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 11:43:50 +0000   Mon, 19 Aug 2024 11:43:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 11:43:50 +0000   Mon, 19 Aug 2024 11:43:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 11:43:50 +0000   Mon, 19 Aug 2024 11:43:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    ha-503856-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9fb3b2ab1e7b42139f0ea868d31218ff
	  System UUID:                9fb3b2ab-1e7b-4213-9f0e-a868d31218ff
	  Boot ID:                    89162d9e-d5e5-43be-8fc5-f8d1a501012c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-h29sh       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-4kpcq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   CIDRAssignmentFailed     10m                cidrAllocator    Node ha-503856-m04 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-503856-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-503856-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-503856-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-503856-m04 event: Registered Node ha-503856-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-503856-m04 event: Registered Node ha-503856-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-503856-m04 event: Registered Node ha-503856-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-503856-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m15s              node-controller  Node ha-503856-m04 event: Registered Node ha-503856-m04 in Controller
	  Normal   RegisteredNode           2m12s              node-controller  Node ha-503856-m04 event: Registered Node ha-503856-m04 in Controller
	  Normal   NodeNotReady             95s                node-controller  Node ha-503856-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           37s                node-controller  Node ha-503856-m04 event: Registered Node ha-503856-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9s                 kubelet          Node ha-503856-m04 has been rebooted, boot id: 89162d9e-d5e5-43be-8fc5-f8d1a501012c
	  Normal   NodeHasSufficientMemory  9s (x2 over 9s)    kubelet          Node ha-503856-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x2 over 9s)    kubelet          Node ha-503856-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x2 over 9s)    kubelet          Node ha-503856-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                9s                 kubelet          Node ha-503856-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.042926] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.060482] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062102] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.195986] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.137965] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.282518] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.003020] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.667712] systemd-fstab-generator[899]: Ignoring "noauto" option for root device
	[  +0.056031] kauditd_printk_skb: 158 callbacks suppressed
	[Aug19 11:30] systemd-fstab-generator[1324]: Ignoring "noauto" option for root device
	[  +0.088050] kauditd_printk_skb: 79 callbacks suppressed
	[  +7.046565] kauditd_printk_skb: 60 callbacks suppressed
	[Aug19 11:31] kauditd_printk_skb: 24 callbacks suppressed
	[Aug19 11:40] systemd-fstab-generator[3466]: Ignoring "noauto" option for root device
	[  +0.145428] systemd-fstab-generator[3478]: Ignoring "noauto" option for root device
	[  +0.172464] systemd-fstab-generator[3492]: Ignoring "noauto" option for root device
	[  +0.149641] systemd-fstab-generator[3504]: Ignoring "noauto" option for root device
	[  +0.301323] systemd-fstab-generator[3532]: Ignoring "noauto" option for root device
	[  +4.821377] systemd-fstab-generator[3632]: Ignoring "noauto" option for root device
	[  +0.087417] kauditd_printk_skb: 100 callbacks suppressed
	[Aug19 11:41] kauditd_printk_skb: 32 callbacks suppressed
	[ +11.969806] kauditd_printk_skb: 65 callbacks suppressed
	[ +10.054793] kauditd_printk_skb: 1 callbacks suppressed
	[ +16.863278] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [3879d2de39f1c9189c2e3eb16e843ddcbe448dd4be3f2b5ba5617abddfeb007a] <==
	{"level":"info","ts":"2024-08-19T11:39:19.783922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b93c4bc4617b0fe is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T11:39:19.784093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b93c4bc4617b0fe became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T11:39:19.784131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b93c4bc4617b0fe received MsgPreVoteResp from 6b93c4bc4617b0fe at term 2"}
	{"level":"info","ts":"2024-08-19T11:39:19.784167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b93c4bc4617b0fe [logterm: 2, index: 2188] sent MsgPreVote request to a2b83f2dcb1ed0d at term 2"}
	{"level":"info","ts":"2024-08-19T11:39:19.784193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b93c4bc4617b0fe [logterm: 2, index: 2188] sent MsgPreVote request to 4ad1f16cda0ec14b at term 2"}
	{"level":"info","ts":"2024-08-19T11:39:19.834466Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6b93c4bc4617b0fe","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-19T11:39:19.834727Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"info","ts":"2024-08-19T11:39:19.834796Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"info","ts":"2024-08-19T11:39:19.834836Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"info","ts":"2024-08-19T11:39:19.835054Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"info","ts":"2024-08-19T11:39:19.835173Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"info","ts":"2024-08-19T11:39:19.835279Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"info","ts":"2024-08-19T11:39:19.835336Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"info","ts":"2024-08-19T11:39:19.835363Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a2b83f2dcb1ed0d"}
	{"level":"info","ts":"2024-08-19T11:39:19.835436Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a2b83f2dcb1ed0d"}
	{"level":"info","ts":"2024-08-19T11:39:19.835502Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a2b83f2dcb1ed0d"}
	{"level":"info","ts":"2024-08-19T11:39:19.835587Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d"}
	{"level":"info","ts":"2024-08-19T11:39:19.835691Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d"}
	{"level":"info","ts":"2024-08-19T11:39:19.835804Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d"}
	{"level":"info","ts":"2024-08-19T11:39:19.835878Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a2b83f2dcb1ed0d"}
	{"level":"info","ts":"2024-08-19T11:39:19.839726Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.102:2380"}
	{"level":"warn","ts":"2024-08-19T11:39:19.839763Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.002045175s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-19T11:39:19.839878Z","caller":"traceutil/trace.go:171","msg":"trace[1982300054] range","detail":"{range_begin:; range_end:; }","duration":"9.002180063s","start":"2024-08-19T11:39:10.837688Z","end":"2024-08-19T11:39:19.839868Z","steps":["trace[1982300054] 'agreement among raft nodes before linearized reading'  (duration: 9.002043282s)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:39:19.839928Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.102:2380"}
	{"level":"info","ts":"2024-08-19T11:39:19.839979Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-503856","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.102:2380"],"advertise-client-urls":["https://192.168.39.102:2379"]}
	
	
	==> etcd [9e253084db077adff3d7dc910bd0fc7c7d6eb0d8b1b91bcd1ebce47ff183cf7c] <==
	{"level":"warn","ts":"2024-08-19T11:43:04.165417Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4ad1f16cda0ec14b","rtt":"0s","error":"dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T11:43:04.165524Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4ad1f16cda0ec14b","rtt":"0s","error":"dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T11:43:07.695950Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.122:2380/version","remote-member-id":"4ad1f16cda0ec14b","error":"Get \"https://192.168.39.122:2380/version\": dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T11:43:07.696112Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"4ad1f16cda0ec14b","error":"Get \"https://192.168.39.122:2380/version\": dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T11:43:09.165728Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4ad1f16cda0ec14b","rtt":"0s","error":"dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T11:43:09.165751Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4ad1f16cda0ec14b","rtt":"0s","error":"dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T11:43:11.698311Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.122:2380/version","remote-member-id":"4ad1f16cda0ec14b","error":"Get \"https://192.168.39.122:2380/version\": dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T11:43:11.698400Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"4ad1f16cda0ec14b","error":"Get \"https://192.168.39.122:2380/version\": dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T11:43:14.166337Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4ad1f16cda0ec14b","rtt":"0s","error":"dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T11:43:14.166509Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4ad1f16cda0ec14b","rtt":"0s","error":"dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-19T11:43:14.793780Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"4ad1f16cda0ec14b"}
	{"level":"info","ts":"2024-08-19T11:43:14.793901Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"info","ts":"2024-08-19T11:43:14.797201Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"info","ts":"2024-08-19T11:43:14.808814Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6b93c4bc4617b0fe","to":"4ad1f16cda0ec14b","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-19T11:43:14.808937Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"info","ts":"2024-08-19T11:43:14.814811Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6b93c4bc4617b0fe","to":"4ad1f16cda0ec14b","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-19T11:43:14.814959Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"info","ts":"2024-08-19T11:43:16.301643Z","caller":"traceutil/trace.go:171","msg":"trace[535509330] transaction","detail":"{read_only:false; response_revision:2425; number_of_response:1; }","duration":"121.698974ms","start":"2024-08-19T11:43:16.179925Z","end":"2024-08-19T11:43:16.301624Z","steps":["trace[535509330] 'process raft request'  (duration: 121.54084ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T11:43:54.140116Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"4ad1f16cda0ec14b","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"51.684791ms"}
	{"level":"warn","ts":"2024-08-19T11:43:54.140245Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"a2b83f2dcb1ed0d","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"51.818635ms"}
	{"level":"info","ts":"2024-08-19T11:43:54.142717Z","caller":"traceutil/trace.go:171","msg":"trace[987334433] linearizableReadLoop","detail":"{readStateIndex:3009; appliedIndex:3010; }","duration":"140.737171ms","start":"2024-08-19T11:43:54.001897Z","end":"2024-08-19T11:43:54.142634Z","steps":["trace[987334433] 'read index received'  (duration: 140.730843ms)","trace[987334433] 'applied index is now lower than readState.Index'  (duration: 5.313µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T11:43:54.144022Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.12035ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-503856-m03\" ","response":"range_response_count:1 size:5879"}
	{"level":"info","ts":"2024-08-19T11:43:54.144164Z","caller":"traceutil/trace.go:171","msg":"trace[999833495] range","detail":"{range_begin:/registry/minions/ha-503856-m03; range_end:; response_count:1; response_revision:2576; }","duration":"142.257507ms","start":"2024-08-19T11:43:54.001892Z","end":"2024-08-19T11:43:54.144150Z","steps":["trace[999833495] 'agreement among raft nodes before linearized reading'  (duration: 140.965938ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T11:43:54.144422Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.277833ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T11:43:54.145149Z","caller":"traceutil/trace.go:171","msg":"trace[1643953805] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2576; }","duration":"140.503742ms","start":"2024-08-19T11:43:54.004634Z","end":"2024-08-19T11:43:54.145137Z","steps":["trace[1643953805] 'agreement among raft nodes before linearized reading'  (duration: 138.826694ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:43:59 up 14 min,  0 users,  load average: 0.51, 0.51, 0.32
	Linux ha-503856 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1964134e9de80be5a82ee203eebe9f5718ba2a8b84e55479e47e81a74a259a50] <==
	I0819 11:38:43.533431       1 main.go:322] Node ha-503856-m04 has CIDR [10.244.4.0/24] 
	I0819 11:38:53.530883       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0819 11:38:53.530926       1 main.go:299] handling current node
	I0819 11:38:53.530945       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0819 11:38:53.530953       1 main.go:322] Node ha-503856-m02 has CIDR [10.244.1.0/24] 
	I0819 11:38:53.531160       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0819 11:38:53.531188       1 main.go:322] Node ha-503856-m03 has CIDR [10.244.3.0/24] 
	I0819 11:38:53.531272       1 main.go:295] Handling node with IPs: map[192.168.39.161:{}]
	I0819 11:38:53.531296       1 main.go:322] Node ha-503856-m04 has CIDR [10.244.4.0/24] 
	I0819 11:39:03.530516       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0819 11:39:03.530656       1 main.go:299] handling current node
	I0819 11:39:03.530688       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0819 11:39:03.530744       1 main.go:322] Node ha-503856-m02 has CIDR [10.244.1.0/24] 
	I0819 11:39:03.530936       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0819 11:39:03.530979       1 main.go:322] Node ha-503856-m03 has CIDR [10.244.3.0/24] 
	I0819 11:39:03.531136       1 main.go:295] Handling node with IPs: map[192.168.39.161:{}]
	I0819 11:39:03.531180       1 main.go:322] Node ha-503856-m04 has CIDR [10.244.4.0/24] 
	I0819 11:39:13.531184       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0819 11:39:13.531286       1 main.go:322] Node ha-503856-m03 has CIDR [10.244.3.0/24] 
	I0819 11:39:13.531424       1 main.go:295] Handling node with IPs: map[192.168.39.161:{}]
	I0819 11:39:13.531446       1 main.go:322] Node ha-503856-m04 has CIDR [10.244.4.0/24] 
	I0819 11:39:13.531528       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0819 11:39:13.531547       1 main.go:299] handling current node
	I0819 11:39:13.531568       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0819 11:39:13.531583       1 main.go:322] Node ha-503856-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [85182c790c374aa6e461a0c8e3ed7dbd713c45e212f3abf2baa82b7db579dcbe] <==
	I0819 11:43:24.751133       1 main.go:322] Node ha-503856-m04 has CIDR [10.244.4.0/24] 
	I0819 11:43:34.753924       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0819 11:43:34.754029       1 main.go:299] handling current node
	I0819 11:43:34.754103       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0819 11:43:34.754129       1 main.go:322] Node ha-503856-m02 has CIDR [10.244.1.0/24] 
	I0819 11:43:34.754273       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0819 11:43:34.754294       1 main.go:322] Node ha-503856-m03 has CIDR [10.244.3.0/24] 
	I0819 11:43:34.754393       1 main.go:295] Handling node with IPs: map[192.168.39.161:{}]
	I0819 11:43:34.754423       1 main.go:322] Node ha-503856-m04 has CIDR [10.244.4.0/24] 
	I0819 11:43:44.748914       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0819 11:43:44.749138       1 main.go:299] handling current node
	I0819 11:43:44.749191       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0819 11:43:44.749224       1 main.go:322] Node ha-503856-m02 has CIDR [10.244.1.0/24] 
	I0819 11:43:44.749417       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0819 11:43:44.749458       1 main.go:322] Node ha-503856-m03 has CIDR [10.244.3.0/24] 
	I0819 11:43:44.749555       1 main.go:295] Handling node with IPs: map[192.168.39.161:{}]
	I0819 11:43:44.749584       1 main.go:322] Node ha-503856-m04 has CIDR [10.244.4.0/24] 
	I0819 11:43:54.751030       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0819 11:43:54.751324       1 main.go:322] Node ha-503856-m02 has CIDR [10.244.1.0/24] 
	I0819 11:43:54.751519       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0819 11:43:54.751570       1 main.go:322] Node ha-503856-m03 has CIDR [10.244.3.0/24] 
	I0819 11:43:54.751725       1 main.go:295] Handling node with IPs: map[192.168.39.161:{}]
	I0819 11:43:54.751753       1 main.go:322] Node ha-503856-m04 has CIDR [10.244.4.0/24] 
	I0819 11:43:54.751813       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0819 11:43:54.751836       1 main.go:299] handling current node
	
	
	==> kube-apiserver [4c7e0e730267c99dc16ff00240be2f1ea3289554181af57aeb7d2da5ff7c80a4] <==
	I0819 11:41:42.847734       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0819 11:41:42.847877       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0819 11:41:42.904325       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 11:41:42.917098       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 11:41:42.917138       1 policy_source.go:224] refreshing policies
	I0819 11:41:42.923333       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 11:41:42.923439       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 11:41:42.923557       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 11:41:42.923592       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 11:41:42.923631       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 11:41:42.923661       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 11:41:42.926656       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 11:41:42.928953       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0819 11:41:42.940394       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.122]
	I0819 11:41:42.943031       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 11:41:42.948220       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 11:41:42.948410       1 aggregator.go:171] initial CRD sync complete...
	I0819 11:41:42.948473       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 11:41:42.948497       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 11:41:42.948580       1 cache.go:39] Caches are synced for autoregister controller
	I0819 11:41:42.953187       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0819 11:41:42.961676       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0819 11:41:42.991210       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 11:41:43.829426       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0819 11:41:44.275662       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.122 192.168.39.183]
	
	
	==> kube-apiserver [db6351db80fdc3b8572db21488088e0c429be0a4387ee0793081531df343c543] <==
	I0819 11:41:03.923339       1 options.go:228] external host was not specified, using 192.168.39.102
	I0819 11:41:03.931607       1 server.go:142] Version: v1.31.0
	I0819 11:41:03.931667       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 11:41:04.500649       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0819 11:41:04.504133       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 11:41:04.505619       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0819 11:41:04.505652       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0819 11:41:04.505838       1 instance.go:232] Using reconciler: lease
	W0819 11:41:24.500622       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0819 11:41:24.500688       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0819 11:41:24.507493       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0819 11:41:24.507573       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [21e59f85336459296bb85fc223b61e6ee435d0dfdac2c736a94f9c4d32df3033] <==
	I0819 11:42:24.959263       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m03"
	I0819 11:42:24.982012       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m03"
	I0819 11:42:24.992390       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:42:25.142293       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.498492ms"
	I0819 11:42:25.143512       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="134.342µs"
	I0819 11:42:27.356002       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m03"
	I0819 11:42:29.419563       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m02"
	I0819 11:42:30.197938       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:42:34.059868       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="42.400831ms"
	I0819 11:42:34.059992       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="78.065µs"
	I0819 11:42:37.434190       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:42:40.281806       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m03"
	I0819 11:43:07.683304       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m03"
	I0819 11:43:07.701520       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m03"
	I0819 11:43:08.530189       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="54.03µs"
	I0819 11:43:10.183315       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m03"
	I0819 11:43:22.173702       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:43:22.237838       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:43:27.829609       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.418142ms"
	I0819 11:43:27.829853       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="83.974µs"
	I0819 11:43:37.996763       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m03"
	I0819 11:43:50.822989       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-503856-m04"
	I0819 11:43:50.823313       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:43:50.838922       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:43:52.191825       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	
	
	==> kube-controller-manager [bb5dc24c345e023d2c1b39aeaa399932e0075a1d509e7efe0fc630b13b34930d] <==
	I0819 11:41:04.713046       1 serving.go:386] Generated self-signed cert in-memory
	I0819 11:41:04.976266       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0819 11:41:04.976307       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 11:41:04.978945       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0819 11:41:04.979618       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 11:41:04.979752       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 11:41:04.979826       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0819 11:41:25.514043       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.102:8443/healthz\": dial tcp 192.168.39.102:8443: connect: connection refused"
	
	
	==> kube-proxy [4410418eb581e358ab209bde84564e655ccd1d460399386fe5db165fc84bfc80] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 11:41:07.775872       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-503856\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 11:41:10.848120       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-503856\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 11:41:13.919575       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-503856\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 11:41:20.063786       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-503856\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 11:41:29.280098       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-503856\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0819 11:41:47.629754       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.102"]
	E0819 11:41:47.629925       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 11:41:47.669393       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 11:41:47.669440       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 11:41:47.669470       1 server_linux.go:169] "Using iptables Proxier"
	I0819 11:41:47.671873       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 11:41:47.672151       1 server.go:483] "Version info" version="v1.31.0"
	I0819 11:41:47.672175       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 11:41:47.673705       1 config.go:197] "Starting service config controller"
	I0819 11:41:47.673743       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 11:41:47.673764       1 config.go:104] "Starting endpoint slice config controller"
	I0819 11:41:47.673768       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 11:41:47.674225       1 config.go:326] "Starting node config controller"
	I0819 11:41:47.674248       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 11:41:47.774539       1 shared_informer.go:320] Caches are synced for node config
	I0819 11:41:47.774556       1 shared_informer.go:320] Caches are synced for service config
	I0819 11:41:47.774569       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [68730d308f1456cb2e025409c4d2edab9b17464c74f9beabfafc13246e8cc2b3] <==
	E0819 11:38:15.745903       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 11:38:15.746110       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 11:38:15.746234       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 11:38:18.815500       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 11:38:18.815957       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 11:38:18.816701       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-503856&resourceVersion=1833": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 11:38:18.817349       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-503856&resourceVersion=1833\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 11:38:21.888843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 11:38:21.888953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 11:38:24.961206       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	W0819 11:38:24.961249       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-503856&resourceVersion=1833": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 11:38:24.961514       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-503856&resourceVersion=1833\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0819 11:38:24.961386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 11:38:34.176547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 11:38:34.176654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 11:38:34.176693       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 11:38:34.176752       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 11:38:37.248697       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-503856&resourceVersion=1833": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 11:38:37.249354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-503856&resourceVersion=1833\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 11:38:49.536621       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 11:38:49.536971       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 11:38:55.680723       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-503856&resourceVersion=1833": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 11:38:55.680810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-503856&resourceVersion=1833\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 11:38:55.680756       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 11:38:55.680968       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [b4d4ca83f34581802d5c17fb457aa924254ed7a3a6c474aecf081a05b7b0d384] <==
	W0819 11:41:33.379679       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.102:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0819 11:41:33.379735       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.102:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.102:8443: connect: connection refused" logger="UnhandledError"
	W0819 11:41:33.486745       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.102:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0819 11:41:33.486814       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.102:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.102:8443: connect: connection refused" logger="UnhandledError"
	W0819 11:41:33.696255       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.102:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0819 11:41:33.696318       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.102:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.102:8443: connect: connection refused" logger="UnhandledError"
	W0819 11:41:34.705908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.102:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0819 11:41:34.706026       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.102:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.102:8443: connect: connection refused" logger="UnhandledError"
	W0819 11:41:34.833939       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.102:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0819 11:41:34.834014       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.102:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.102:8443: connect: connection refused" logger="UnhandledError"
	W0819 11:41:34.834559       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.102:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0819 11:41:34.834623       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.102:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.102:8443: connect: connection refused" logger="UnhandledError"
	W0819 11:41:34.896697       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.102:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0819 11:41:34.896790       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.102:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.102:8443: connect: connection refused" logger="UnhandledError"
	W0819 11:41:35.459416       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.102:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0819 11:41:35.459494       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.102:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.102:8443: connect: connection refused" logger="UnhandledError"
	W0819 11:41:40.322581       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.102:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0819 11:41:40.322768       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.102:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.102:8443: connect: connection refused" logger="UnhandledError"
	W0819 11:41:42.853678       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 11:41:42.853839       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:41:42.854030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 11:41:42.854176       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:41:42.856550       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 11:41:42.856642       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0819 11:42:00.528143       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ccea80d1a22a4536a82c166633220aa67d32633919751a2cf1da7f06c91bc674] <==
	W0819 11:30:01.659419       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 11:30:01.660325       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 11:30:03.075764       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 11:33:11.218857       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-h29sh\": pod kindnet-h29sh is already assigned to node \"ha-503856-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-h29sh" node="ha-503856-m04"
	E0819 11:33:11.219015       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-h29sh\": pod kindnet-h29sh is already assigned to node \"ha-503856-m04\"" pod="kube-system/kindnet-h29sh"
	E0819 11:33:11.221900       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4kpcq\": pod kube-proxy-4kpcq is already assigned to node \"ha-503856-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4kpcq" node="ha-503856-m04"
	E0819 11:33:11.221962       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9f038ca5-2e98-4126-9959-f24f6ab3a802(kube-system/kube-proxy-4kpcq) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-4kpcq"
	E0819 11:33:11.221977       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4kpcq\": pod kube-proxy-4kpcq is already assigned to node \"ha-503856-m04\"" pod="kube-system/kube-proxy-4kpcq"
	I0819 11:33:11.222009       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4kpcq" node="ha-503856-m04"
	E0819 11:33:11.260369       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5zzk5\": pod kube-proxy-5zzk5 is already assigned to node \"ha-503856-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5zzk5" node="ha-503856-m04"
	E0819 11:33:11.260439       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 29216c29-6ceb-411d-a714-c94d674aed3f(kube-system/kube-proxy-5zzk5) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5zzk5"
	E0819 11:33:11.260454       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5zzk5\": pod kube-proxy-5zzk5 is already assigned to node \"ha-503856-m04\"" pod="kube-system/kube-proxy-5zzk5"
	I0819 11:33:11.260471       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5zzk5" node="ha-503856-m04"
	E0819 11:39:11.384170       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0819 11:39:11.514012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0819 11:39:15.748727       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0819 11:39:17.406249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0819 11:39:17.824460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0819 11:39:17.971967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0819 11:39:18.330384       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0819 11:39:18.761341       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0819 11:39:18.769619       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0819 11:39:18.802761       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0819 11:39:19.342163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0819 11:39:19.666559       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 19 11:42:36 ha-503856 kubelet[1331]: E0819 11:42:36.096020    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067756095792383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:42:36 ha-503856 kubelet[1331]: E0819 11:42:36.096467    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067756095792383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:42:36 ha-503856 kubelet[1331]: I0819 11:42:36.882552    1331 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-503856" podUID="a184b6bf-9e5f-40a1-a3f8-5b97ce4cd6b8"
	Aug 19 11:42:36 ha-503856 kubelet[1331]: I0819 11:42:36.900773    1331 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-503856"
	Aug 19 11:42:46 ha-503856 kubelet[1331]: E0819 11:42:46.098699    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067766098259106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:42:46 ha-503856 kubelet[1331]: E0819 11:42:46.098742    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067766098259106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:42:56 ha-503856 kubelet[1331]: E0819 11:42:56.105136    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067776100682145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:42:56 ha-503856 kubelet[1331]: E0819 11:42:56.105476    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067776100682145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:43:05 ha-503856 kubelet[1331]: E0819 11:43:05.896650    1331 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 11:43:05 ha-503856 kubelet[1331]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 11:43:05 ha-503856 kubelet[1331]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 11:43:05 ha-503856 kubelet[1331]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 11:43:05 ha-503856 kubelet[1331]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 11:43:06 ha-503856 kubelet[1331]: E0819 11:43:06.107410    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067786106475162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:43:06 ha-503856 kubelet[1331]: E0819 11:43:06.107433    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067786106475162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:43:16 ha-503856 kubelet[1331]: E0819 11:43:16.110785    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067796109889375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:43:16 ha-503856 kubelet[1331]: E0819 11:43:16.111264    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067796109889375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:43:26 ha-503856 kubelet[1331]: E0819 11:43:26.112888    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067806112312072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:43:26 ha-503856 kubelet[1331]: E0819 11:43:26.113212    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067806112312072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:43:36 ha-503856 kubelet[1331]: E0819 11:43:36.117555    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067816116429268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:43:36 ha-503856 kubelet[1331]: E0819 11:43:36.117584    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067816116429268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:43:46 ha-503856 kubelet[1331]: E0819 11:43:46.119627    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067826119275126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:43:46 ha-503856 kubelet[1331]: E0819 11:43:46.119703    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067826119275126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:43:56 ha-503856 kubelet[1331]: E0819 11:43:56.121417    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067836121056577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:43:56 ha-503856 kubelet[1331]: E0819 11:43:56.121457    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067836121056577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 11:43:58.726564  129046 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19476-99410/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-503856 -n ha-503856
helpers_test.go:261: (dbg) Run:  kubectl --context ha-503856 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (403.78s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-503856 stop -v=7 --alsologtostderr: exit status 82 (2m0.479165096s)

                                                
                                                
-- stdout --
	* Stopping node "ha-503856-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:44:17.706349  129459 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:44:17.706475  129459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:44:17.706485  129459 out.go:358] Setting ErrFile to fd 2...
	I0819 11:44:17.706489  129459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:44:17.706656  129459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 11:44:17.706904  129459 out.go:352] Setting JSON to false
	I0819 11:44:17.706980  129459 mustload.go:65] Loading cluster: ha-503856
	I0819 11:44:17.707331  129459 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:44:17.707446  129459 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/config.json ...
	I0819 11:44:17.707624  129459 mustload.go:65] Loading cluster: ha-503856
	I0819 11:44:17.707789  129459 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:44:17.707816  129459 stop.go:39] StopHost: ha-503856-m04
	I0819 11:44:17.708230  129459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:44:17.708272  129459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:44:17.724186  129459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41033
	I0819 11:44:17.724708  129459 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:44:17.725283  129459 main.go:141] libmachine: Using API Version  1
	I0819 11:44:17.725314  129459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:44:17.725697  129459 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:44:17.727749  129459 out.go:177] * Stopping node "ha-503856-m04"  ...
	I0819 11:44:17.729087  129459 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 11:44:17.729130  129459 main.go:141] libmachine: (ha-503856-m04) Calling .DriverName
	I0819 11:44:17.729400  129459 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 11:44:17.729422  129459 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHHostname
	I0819 11:44:17.732296  129459 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:44:17.732740  129459 main.go:141] libmachine: (ha-503856-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d3:72", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:43:45 +0000 UTC Type:0 Mac:52:54:00:56:d3:72 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-503856-m04 Clientid:01:52:54:00:56:d3:72}
	I0819 11:44:17.732764  129459 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:44:17.732905  129459 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHPort
	I0819 11:44:17.733073  129459 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHKeyPath
	I0819 11:44:17.733225  129459 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHUsername
	I0819 11:44:17.733357  129459 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m04/id_rsa Username:docker}
	I0819 11:44:17.817544  129459 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 11:44:17.869841  129459 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 11:44:17.922268  129459 main.go:141] libmachine: Stopping "ha-503856-m04"...
	I0819 11:44:17.922300  129459 main.go:141] libmachine: (ha-503856-m04) Calling .GetState
	I0819 11:44:17.923985  129459 main.go:141] libmachine: (ha-503856-m04) Calling .Stop
	I0819 11:44:17.927640  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 0/120
	I0819 11:44:18.929906  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 1/120
	I0819 11:44:19.931766  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 2/120
	I0819 11:44:20.933515  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 3/120
	I0819 11:44:21.935227  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 4/120
	I0819 11:44:22.937028  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 5/120
	I0819 11:44:23.938523  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 6/120
	I0819 11:44:24.940024  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 7/120
	I0819 11:44:25.942497  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 8/120
	I0819 11:44:26.944041  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 9/120
	I0819 11:44:27.945568  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 10/120
	I0819 11:44:28.947389  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 11/120
	I0819 11:44:29.948898  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 12/120
	I0819 11:44:30.950609  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 13/120
	I0819 11:44:31.952275  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 14/120
	I0819 11:44:32.954531  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 15/120
	I0819 11:44:33.956233  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 16/120
	I0819 11:44:34.958280  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 17/120
	I0819 11:44:35.959774  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 18/120
	I0819 11:44:36.961619  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 19/120
	I0819 11:44:37.963961  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 20/120
	I0819 11:44:38.965382  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 21/120
	I0819 11:44:39.966852  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 22/120
	I0819 11:44:40.968427  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 23/120
	I0819 11:44:41.970672  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 24/120
	I0819 11:44:42.972758  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 25/120
	I0819 11:44:43.974123  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 26/120
	I0819 11:44:44.975525  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 27/120
	I0819 11:44:45.977171  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 28/120
	I0819 11:44:46.978684  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 29/120
	I0819 11:44:47.980979  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 30/120
	I0819 11:44:48.982516  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 31/120
	I0819 11:44:49.985042  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 32/120
	I0819 11:44:50.986297  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 33/120
	I0819 11:44:51.987886  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 34/120
	I0819 11:44:52.990079  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 35/120
	I0819 11:44:53.991811  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 36/120
	I0819 11:44:54.993292  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 37/120
	I0819 11:44:55.994743  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 38/120
	I0819 11:44:56.996168  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 39/120
	I0819 11:44:57.998457  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 40/120
	I0819 11:44:59.000211  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 41/120
	I0819 11:45:00.001872  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 42/120
	I0819 11:45:01.003241  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 43/120
	I0819 11:45:02.004750  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 44/120
	I0819 11:45:03.006693  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 45/120
	I0819 11:45:04.008619  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 46/120
	I0819 11:45:05.010432  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 47/120
	I0819 11:45:06.011964  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 48/120
	I0819 11:45:07.014330  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 49/120
	I0819 11:45:08.016845  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 50/120
	I0819 11:45:09.018579  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 51/120
	I0819 11:45:10.020836  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 52/120
	I0819 11:45:11.023070  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 53/120
	I0819 11:45:12.024776  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 54/120
	I0819 11:45:13.026945  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 55/120
	I0819 11:45:14.028388  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 56/120
	I0819 11:45:15.030432  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 57/120
	I0819 11:45:16.031919  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 58/120
	I0819 11:45:17.033340  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 59/120
	I0819 11:45:18.035643  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 60/120
	I0819 11:45:19.037002  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 61/120
	I0819 11:45:20.038767  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 62/120
	I0819 11:45:21.040957  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 63/120
	I0819 11:45:22.042293  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 64/120
	I0819 11:45:23.044434  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 65/120
	I0819 11:45:24.046267  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 66/120
	I0819 11:45:25.047564  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 67/120
	I0819 11:45:26.049068  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 68/120
	I0819 11:45:27.050619  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 69/120
	I0819 11:45:28.052530  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 70/120
	I0819 11:45:29.054274  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 71/120
	I0819 11:45:30.055591  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 72/120
	I0819 11:45:31.056913  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 73/120
	I0819 11:45:32.058385  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 74/120
	I0819 11:45:33.060670  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 75/120
	I0819 11:45:34.062405  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 76/120
	I0819 11:45:35.064092  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 77/120
	I0819 11:45:36.065418  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 78/120
	I0819 11:45:37.066654  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 79/120
	I0819 11:45:38.068125  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 80/120
	I0819 11:45:39.070330  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 81/120
	I0819 11:45:40.071841  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 82/120
	I0819 11:45:41.073287  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 83/120
	I0819 11:45:42.074623  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 84/120
	I0819 11:45:43.076745  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 85/120
	I0819 11:45:44.078148  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 86/120
	I0819 11:45:45.079366  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 87/120
	I0819 11:45:46.080914  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 88/120
	I0819 11:45:47.082383  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 89/120
	I0819 11:45:48.084646  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 90/120
	I0819 11:45:49.086355  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 91/120
	I0819 11:45:50.087638  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 92/120
	I0819 11:45:51.089021  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 93/120
	I0819 11:45:52.090605  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 94/120
	I0819 11:45:53.092823  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 95/120
	I0819 11:45:54.094185  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 96/120
	I0819 11:45:55.095497  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 97/120
	I0819 11:45:56.097151  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 98/120
	I0819 11:45:57.098330  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 99/120
	I0819 11:45:58.100493  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 100/120
	I0819 11:45:59.101958  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 101/120
	I0819 11:46:00.103566  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 102/120
	I0819 11:46:01.104790  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 103/120
	I0819 11:46:02.106462  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 104/120
	I0819 11:46:03.108629  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 105/120
	I0819 11:46:04.110114  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 106/120
	I0819 11:46:05.111676  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 107/120
	I0819 11:46:06.113645  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 108/120
	I0819 11:46:07.115920  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 109/120
	I0819 11:46:08.118124  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 110/120
	I0819 11:46:09.119622  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 111/120
	I0819 11:46:10.121165  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 112/120
	I0819 11:46:11.122515  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 113/120
	I0819 11:46:12.124176  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 114/120
	I0819 11:46:13.126185  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 115/120
	I0819 11:46:14.127425  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 116/120
	I0819 11:46:15.128939  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 117/120
	I0819 11:46:16.131121  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 118/120
	I0819 11:46:17.132589  129459 main.go:141] libmachine: (ha-503856-m04) Waiting for machine to stop 119/120
	I0819 11:46:18.133495  129459 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0819 11:46:18.133557  129459 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0819 11:46:18.135504  129459 out.go:201] 
	W0819 11:46:18.136756  129459 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0819 11:46:18.136774  129459 out.go:270] * 
	* 
	W0819 11:46:18.139112  129459 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:46:18.140168  129459 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-503856 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr: exit status 3 (18.863432812s)

                                                
                                                
-- stdout --
	ha-503856
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-503856-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-503856-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:46:18.189626  129908 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:46:18.189884  129908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:46:18.189892  129908 out.go:358] Setting ErrFile to fd 2...
	I0819 11:46:18.189896  129908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:46:18.190061  129908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 11:46:18.190236  129908 out.go:352] Setting JSON to false
	I0819 11:46:18.190261  129908 mustload.go:65] Loading cluster: ha-503856
	I0819 11:46:18.190333  129908 notify.go:220] Checking for updates...
	I0819 11:46:18.190627  129908 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:46:18.190643  129908 status.go:255] checking status of ha-503856 ...
	I0819 11:46:18.191128  129908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:46:18.191205  129908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:46:18.214704  129908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42371
	I0819 11:46:18.215274  129908 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:46:18.215967  129908 main.go:141] libmachine: Using API Version  1
	I0819 11:46:18.215988  129908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:46:18.216445  129908 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:46:18.216707  129908 main.go:141] libmachine: (ha-503856) Calling .GetState
	I0819 11:46:18.218581  129908 status.go:330] ha-503856 host status = "Running" (err=<nil>)
	I0819 11:46:18.218599  129908 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:46:18.218944  129908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:46:18.218996  129908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:46:18.235167  129908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36697
	I0819 11:46:18.235668  129908 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:46:18.236302  129908 main.go:141] libmachine: Using API Version  1
	I0819 11:46:18.236343  129908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:46:18.236696  129908 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:46:18.236906  129908 main.go:141] libmachine: (ha-503856) Calling .GetIP
	I0819 11:46:18.240243  129908 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:46:18.240729  129908 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:46:18.240764  129908 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:46:18.240937  129908 host.go:66] Checking if "ha-503856" exists ...
	I0819 11:46:18.241284  129908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:46:18.241327  129908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:46:18.257184  129908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34061
	I0819 11:46:18.257626  129908 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:46:18.258095  129908 main.go:141] libmachine: Using API Version  1
	I0819 11:46:18.258119  129908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:46:18.258476  129908 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:46:18.258660  129908 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:46:18.259038  129908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:46:18.259067  129908 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:46:18.262100  129908 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:46:18.262586  129908 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:46:18.262614  129908 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:46:18.262797  129908 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:46:18.262965  129908 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:46:18.263142  129908 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:46:18.263323  129908 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:46:18.343566  129908 ssh_runner.go:195] Run: systemctl --version
	I0819 11:46:18.349599  129908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:46:18.364308  129908 kubeconfig.go:125] found "ha-503856" server: "https://192.168.39.254:8443"
	I0819 11:46:18.364346  129908 api_server.go:166] Checking apiserver status ...
	I0819 11:46:18.364381  129908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:46:18.378258  129908 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4823/cgroup
	W0819 11:46:18.388900  129908 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4823/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:46:18.388963  129908 ssh_runner.go:195] Run: ls
	I0819 11:46:18.393943  129908 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 11:46:18.398965  129908 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 11:46:18.398998  129908 status.go:422] ha-503856 apiserver status = Running (err=<nil>)
	I0819 11:46:18.399011  129908 status.go:257] ha-503856 status: &{Name:ha-503856 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:46:18.399030  129908 status.go:255] checking status of ha-503856-m02 ...
	I0819 11:46:18.399334  129908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:46:18.399359  129908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:46:18.414522  129908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33119
	I0819 11:46:18.414953  129908 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:46:18.415574  129908 main.go:141] libmachine: Using API Version  1
	I0819 11:46:18.415596  129908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:46:18.416113  129908 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:46:18.416312  129908 main.go:141] libmachine: (ha-503856-m02) Calling .GetState
	I0819 11:46:18.417965  129908 status.go:330] ha-503856-m02 host status = "Running" (err=<nil>)
	I0819 11:46:18.417986  129908 host.go:66] Checking if "ha-503856-m02" exists ...
	I0819 11:46:18.418261  129908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:46:18.418285  129908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:46:18.433379  129908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39243
	I0819 11:46:18.433921  129908 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:46:18.434454  129908 main.go:141] libmachine: Using API Version  1
	I0819 11:46:18.434479  129908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:46:18.434814  129908 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:46:18.435024  129908 main.go:141] libmachine: (ha-503856-m02) Calling .GetIP
	I0819 11:46:18.437910  129908 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:46:18.438362  129908 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:41:08 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:46:18.438395  129908 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:46:18.438585  129908 host.go:66] Checking if "ha-503856-m02" exists ...
	I0819 11:46:18.438896  129908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:46:18.438963  129908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:46:18.454257  129908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34553
	I0819 11:46:18.454714  129908 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:46:18.455211  129908 main.go:141] libmachine: Using API Version  1
	I0819 11:46:18.455242  129908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:46:18.455542  129908 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:46:18.455706  129908 main.go:141] libmachine: (ha-503856-m02) Calling .DriverName
	I0819 11:46:18.455885  129908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:46:18.455909  129908 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHHostname
	I0819 11:46:18.459040  129908 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:46:18.459453  129908 main.go:141] libmachine: (ha-503856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:a0:c4", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:41:08 +0000 UTC Type:0 Mac:52:54:00:f7:a0:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-503856-m02 Clientid:01:52:54:00:f7:a0:c4}
	I0819 11:46:18.459494  129908 main.go:141] libmachine: (ha-503856-m02) DBG | domain ha-503856-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:f7:a0:c4 in network mk-ha-503856
	I0819 11:46:18.459655  129908 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHPort
	I0819 11:46:18.459869  129908 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHKeyPath
	I0819 11:46:18.460062  129908 main.go:141] libmachine: (ha-503856-m02) Calling .GetSSHUsername
	I0819 11:46:18.460224  129908 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m02/id_rsa Username:docker}
	I0819 11:46:18.539491  129908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:46:18.554973  129908 kubeconfig.go:125] found "ha-503856" server: "https://192.168.39.254:8443"
	I0819 11:46:18.555005  129908 api_server.go:166] Checking apiserver status ...
	I0819 11:46:18.555040  129908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:46:18.568939  129908 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1364/cgroup
	W0819 11:46:18.579953  129908 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1364/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:46:18.580014  129908 ssh_runner.go:195] Run: ls
	I0819 11:46:18.584291  129908 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 11:46:18.588710  129908 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 11:46:18.588742  129908 status.go:422] ha-503856-m02 apiserver status = Running (err=<nil>)
	I0819 11:46:18.588750  129908 status.go:257] ha-503856-m02 status: &{Name:ha-503856-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:46:18.588771  129908 status.go:255] checking status of ha-503856-m04 ...
	I0819 11:46:18.589101  129908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:46:18.589130  129908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:46:18.605905  129908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46631
	I0819 11:46:18.606389  129908 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:46:18.606969  129908 main.go:141] libmachine: Using API Version  1
	I0819 11:46:18.606991  129908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:46:18.607333  129908 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:46:18.607555  129908 main.go:141] libmachine: (ha-503856-m04) Calling .GetState
	I0819 11:46:18.609491  129908 status.go:330] ha-503856-m04 host status = "Running" (err=<nil>)
	I0819 11:46:18.609514  129908 host.go:66] Checking if "ha-503856-m04" exists ...
	I0819 11:46:18.609881  129908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:46:18.609921  129908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:46:18.625340  129908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44927
	I0819 11:46:18.625763  129908 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:46:18.626215  129908 main.go:141] libmachine: Using API Version  1
	I0819 11:46:18.626233  129908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:46:18.626516  129908 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:46:18.626673  129908 main.go:141] libmachine: (ha-503856-m04) Calling .GetIP
	I0819 11:46:18.629419  129908 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:46:18.629814  129908 main.go:141] libmachine: (ha-503856-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d3:72", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:43:45 +0000 UTC Type:0 Mac:52:54:00:56:d3:72 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-503856-m04 Clientid:01:52:54:00:56:d3:72}
	I0819 11:46:18.629841  129908 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:46:18.630000  129908 host.go:66] Checking if "ha-503856-m04" exists ...
	I0819 11:46:18.630449  129908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:46:18.630499  129908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:46:18.645653  129908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34623
	I0819 11:46:18.646164  129908 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:46:18.646679  129908 main.go:141] libmachine: Using API Version  1
	I0819 11:46:18.646701  129908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:46:18.647047  129908 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:46:18.647235  129908 main.go:141] libmachine: (ha-503856-m04) Calling .DriverName
	I0819 11:46:18.647490  129908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:46:18.647514  129908 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHHostname
	I0819 11:46:18.650650  129908 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:46:18.651076  129908 main.go:141] libmachine: (ha-503856-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d3:72", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:43:45 +0000 UTC Type:0 Mac:52:54:00:56:d3:72 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-503856-m04 Clientid:01:52:54:00:56:d3:72}
	I0819 11:46:18.651108  129908 main.go:141] libmachine: (ha-503856-m04) DBG | domain ha-503856-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:56:d3:72 in network mk-ha-503856
	I0819 11:46:18.651266  129908 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHPort
	I0819 11:46:18.651450  129908 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHKeyPath
	I0819 11:46:18.651614  129908 main.go:141] libmachine: (ha-503856-m04) Calling .GetSSHUsername
	I0819 11:46:18.651788  129908 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856-m04/id_rsa Username:docker}
	W0819 11:46:37.004045  129908 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.161:22: connect: no route to host
	W0819 11:46:37.004199  129908 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	E0819 11:46:37.004226  129908 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	I0819 11:46:37.004236  129908 status.go:257] ha-503856-m04 status: &{Name:ha-503856-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0819 11:46:37.004257  129908 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr" : exit status 3
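The failure above is the per-node status path visible in the stderr log: for each node, minikube opens an SSH session to sample /var usage ("df -h /var"), checks the kubelet unit, and probes the control-plane VIP at https://192.168.39.254:8443/healthz, expecting HTTP 200 with body "ok"; for ha-503856-m04 the SSH dial to 192.168.39.161:22 failed with "no route to host", so that host is reported as Error and the status command exits with status 3. The snippet below is a minimal, hypothetical Go sketch of those two probes, not minikube's actual implementation; the addresses are taken from the log above, and TLS verification is skipped only because the test cluster uses a self-signed CA.

// Illustrative sketch only: reproduce the two checks the status log above
// performs per node. The node IP, SSH port, and apiserver URL are assumptions
// copied from the log output; InsecureSkipVerify is used solely because the
// test cluster's certificate authority is self-signed.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net"
	"net/http"
	"time"
)

// sshReachable reports whether the node's SSH port accepts TCP connections;
// this is the step that failed for ha-503856-m04 with "no route to host".
func sshReachable(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

// apiserverHealthy issues the same kind of GET /healthz probe the log shows,
// expecting an HTTP 200 response whose body is exactly "ok".
func apiserverHealthy(url string) bool {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok"
}

func main() {
	// Values taken from the run above; adjust for a different profile.
	fmt.Println("m04 ssh reachable:", sshReachable("192.168.39.161:22"))
	fmt.Println("apiserver healthy:", apiserverHealthy("https://192.168.39.254:8443/healthz"))
}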
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-503856 -n ha-503856
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-503856 logs -n 25: (1.581976931s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-503856 ssh -n ha-503856-m02 sudo cat                                          | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /home/docker/cp-test_ha-503856-m03_ha-503856-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-503856 cp ha-503856-m03:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04:/home/docker/cp-test_ha-503856-m03_ha-503856-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n ha-503856-m04 sudo cat                                          | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /home/docker/cp-test_ha-503856-m03_ha-503856-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-503856 cp testdata/cp-test.txt                                                | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-503856 cp ha-503856-m04:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4008298079/001/cp-test_ha-503856-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-503856 cp ha-503856-m04:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856:/home/docker/cp-test_ha-503856-m04_ha-503856.txt                       |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n ha-503856 sudo cat                                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /home/docker/cp-test_ha-503856-m04_ha-503856.txt                                 |           |         |         |                     |                     |
	| cp      | ha-503856 cp ha-503856-m04:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m02:/home/docker/cp-test_ha-503856-m04_ha-503856-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n ha-503856-m02 sudo cat                                          | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /home/docker/cp-test_ha-503856-m04_ha-503856-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-503856 cp ha-503856-m04:/home/docker/cp-test.txt                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m03:/home/docker/cp-test_ha-503856-m04_ha-503856-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n                                                                 | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | ha-503856-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-503856 ssh -n ha-503856-m03 sudo cat                                          | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC | 19 Aug 24 11:33 UTC |
	|         | /home/docker/cp-test_ha-503856-m04_ha-503856-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-503856 node stop m02 -v=7                                                     | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:33 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-503856 node start m02 -v=7                                                    | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:36 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-503856 -v=7                                                           | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-503856 -v=7                                                                | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-503856 --wait=true -v=7                                                    | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:39 UTC | 19 Aug 24 11:43 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-503856                                                                | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:43 UTC |                     |
	| node    | ha-503856 node delete m03 -v=7                                                   | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:44 UTC | 19 Aug 24 11:44 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-503856 stop -v=7                                                              | ha-503856 | jenkins | v1.33.1 | 19 Aug 24 11:44 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 11:39:18
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 11:39:18.835403  127629 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:39:18.835548  127629 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:39:18.835557  127629 out.go:358] Setting ErrFile to fd 2...
	I0819 11:39:18.835561  127629 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:39:18.835776  127629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 11:39:18.836362  127629 out.go:352] Setting JSON to false
	I0819 11:39:18.837320  127629 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4905,"bootTime":1724062654,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 11:39:18.837405  127629 start.go:139] virtualization: kvm guest
	I0819 11:39:18.839741  127629 out.go:177] * [ha-503856] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 11:39:18.841149  127629 notify.go:220] Checking for updates...
	I0819 11:39:18.841168  127629 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 11:39:18.842417  127629 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:39:18.843589  127629 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 11:39:18.844833  127629 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 11:39:18.846140  127629 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 11:39:18.847529  127629 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:39:18.849166  127629 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:39:18.849269  127629 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:39:18.849703  127629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:39:18.849759  127629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:39:18.865677  127629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37413
	I0819 11:39:18.866196  127629 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:39:18.866751  127629 main.go:141] libmachine: Using API Version  1
	I0819 11:39:18.866771  127629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:39:18.867096  127629 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:39:18.867305  127629 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:39:18.905566  127629 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 11:39:18.906664  127629 start.go:297] selected driver: kvm2
	I0819 11:39:18.906695  127629 start.go:901] validating driver "kvm2" against &{Name:ha-503856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:39:18.906948  127629 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:39:18.907463  127629 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:39:18.907580  127629 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19476-99410/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 11:39:18.924433  127629 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 11:39:18.925244  127629 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:39:18.925301  127629 cni.go:84] Creating CNI manager for ""
	I0819 11:39:18.925314  127629 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 11:39:18.925378  127629 start.go:340] cluster config:
	{Name:ha-503856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:39:18.925537  127629 iso.go:125] acquiring lock: {Name:mkf3e460532d8bd3cc1352bbf698537d79a481a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:39:18.927195  127629 out.go:177] * Starting "ha-503856" primary control-plane node in "ha-503856" cluster
	I0819 11:39:18.928386  127629 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 11:39:18.928432  127629 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 11:39:18.928440  127629 cache.go:56] Caching tarball of preloaded images
	I0819 11:39:18.928546  127629 preload.go:172] Found /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 11:39:18.928558  127629 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 11:39:18.928673  127629 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/config.json ...
	I0819 11:39:18.928924  127629 start.go:360] acquireMachinesLock for ha-503856: {Name:mk79269fbbd6912302c8df8ddb039d4e2f1a0790 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:39:18.928968  127629 start.go:364] duration metric: took 25.817µs to acquireMachinesLock for "ha-503856"
	I0819 11:39:18.928985  127629 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:39:18.928990  127629 fix.go:54] fixHost starting: 
	I0819 11:39:18.929239  127629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:39:18.929279  127629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:39:18.944048  127629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43587
	I0819 11:39:18.944467  127629 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:39:18.944960  127629 main.go:141] libmachine: Using API Version  1
	I0819 11:39:18.944987  127629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:39:18.945391  127629 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:39:18.945627  127629 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:39:18.945784  127629 main.go:141] libmachine: (ha-503856) Calling .GetState
	I0819 11:39:18.947504  127629 fix.go:112] recreateIfNeeded on ha-503856: state=Running err=<nil>
	W0819 11:39:18.947528  127629 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:39:18.949220  127629 out.go:177] * Updating the running kvm2 "ha-503856" VM ...
	I0819 11:39:18.950266  127629 machine.go:93] provisionDockerMachine start ...
	I0819 11:39:18.950343  127629 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:39:18.950570  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:39:18.953405  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:18.953875  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:39:18.953900  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:18.954107  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:39:18.954376  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:39:18.954522  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:39:18.954787  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:39:18.955050  127629 main.go:141] libmachine: Using SSH client type: native
	I0819 11:39:18.955240  127629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0819 11:39:18.955251  127629 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 11:39:19.060101  127629 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-503856
	
	I0819 11:39:19.060132  127629 main.go:141] libmachine: (ha-503856) Calling .GetMachineName
	I0819 11:39:19.060440  127629 buildroot.go:166] provisioning hostname "ha-503856"
	I0819 11:39:19.060468  127629 main.go:141] libmachine: (ha-503856) Calling .GetMachineName
	I0819 11:39:19.060670  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:39:19.063532  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:19.064054  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:39:19.064083  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:19.064285  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:39:19.064484  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:39:19.064669  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:39:19.064816  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:39:19.064994  127629 main.go:141] libmachine: Using SSH client type: native
	I0819 11:39:19.065217  127629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0819 11:39:19.065235  127629 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-503856 && echo "ha-503856" | sudo tee /etc/hostname
	I0819 11:39:19.187967  127629 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-503856
	
	I0819 11:39:19.187999  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:39:19.190894  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:19.191262  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:39:19.191291  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:19.191495  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:39:19.191684  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:39:19.191864  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:39:19.192056  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:39:19.192257  127629 main.go:141] libmachine: Using SSH client type: native
	I0819 11:39:19.192460  127629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0819 11:39:19.192478  127629 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-503856' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-503856/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-503856' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 11:39:19.300156  127629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 11:39:19.300186  127629 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19476-99410/.minikube CaCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19476-99410/.minikube}
	I0819 11:39:19.300211  127629 buildroot.go:174] setting up certificates
	I0819 11:39:19.300223  127629 provision.go:84] configureAuth start
	I0819 11:39:19.300237  127629 main.go:141] libmachine: (ha-503856) Calling .GetMachineName
	I0819 11:39:19.300535  127629 main.go:141] libmachine: (ha-503856) Calling .GetIP
	I0819 11:39:19.302711  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:19.303124  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:39:19.303146  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:19.303383  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:39:19.305839  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:19.306292  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:39:19.306320  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:19.306460  127629 provision.go:143] copyHostCerts
	I0819 11:39:19.306492  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 11:39:19.306552  127629 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem, removing ...
	I0819 11:39:19.306572  127629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 11:39:19.306642  127629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem (1082 bytes)
	I0819 11:39:19.306723  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 11:39:19.306740  127629 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem, removing ...
	I0819 11:39:19.306747  127629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 11:39:19.306773  127629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem (1123 bytes)
	I0819 11:39:19.306816  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 11:39:19.306832  127629 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem, removing ...
	I0819 11:39:19.306838  127629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 11:39:19.306858  127629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem (1679 bytes)
	I0819 11:39:19.306929  127629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem org=jenkins.ha-503856 san=[127.0.0.1 192.168.39.102 ha-503856 localhost minikube]
	I0819 11:39:19.392917  127629 provision.go:177] copyRemoteCerts
	I0819 11:39:19.392986  127629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 11:39:19.393017  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:39:19.395971  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:19.396305  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:39:19.396332  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:19.396594  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:39:19.396794  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:39:19.396956  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:39:19.397099  127629 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:39:19.481789  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 11:39:19.481872  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 11:39:19.508192  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 11:39:19.508277  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 11:39:19.534486  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 11:39:19.534572  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0819 11:39:19.562207  127629 provision.go:87] duration metric: took 261.970066ms to configureAuth
	I0819 11:39:19.562237  127629 buildroot.go:189] setting minikube options for container-runtime
	I0819 11:39:19.562442  127629 config.go:182] Loaded profile config "ha-503856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:39:19.562558  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:39:19.565685  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:19.566141  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:39:19.566168  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:39:19.566536  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:39:19.566770  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:39:19.566919  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:39:19.567098  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:39:19.567236  127629 main.go:141] libmachine: Using SSH client type: native
	I0819 11:39:19.567409  127629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0819 11:39:19.567424  127629 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 11:40:50.425779  127629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 11:40:50.425809  127629 machine.go:96] duration metric: took 1m31.475523752s to provisionDockerMachine
	I0819 11:40:50.425836  127629 start.go:293] postStartSetup for "ha-503856" (driver="kvm2")
	I0819 11:40:50.425853  127629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 11:40:50.425884  127629 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:40:50.426290  127629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 11:40:50.426326  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:40:50.429727  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:50.430290  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:40:50.430321  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:50.430509  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:40:50.430721  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:40:50.430915  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:40:50.431060  127629 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:40:50.514384  127629 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 11:40:50.518799  127629 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 11:40:50.518826  127629 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/addons for local assets ...
	I0819 11:40:50.518893  127629 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/files for local assets ...
	I0819 11:40:50.518974  127629 filesync.go:149] local asset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> 1066322.pem in /etc/ssl/certs
	I0819 11:40:50.518991  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> /etc/ssl/certs/1066322.pem
	I0819 11:40:50.519106  127629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 11:40:50.528289  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 11:40:50.552292  127629 start.go:296] duration metric: took 126.436353ms for postStartSetup
	I0819 11:40:50.552339  127629 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:40:50.552637  127629 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0819 11:40:50.552667  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:40:50.555339  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:50.555712  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:40:50.555750  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:50.555908  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:40:50.556112  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:40:50.556313  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:40:50.556470  127629 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	W0819 11:40:50.637638  127629 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0819 11:40:50.637672  127629 fix.go:56] duration metric: took 1m31.708680394s for fixHost
	I0819 11:40:50.637698  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:40:50.640384  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:50.640764  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:40:50.640784  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:50.640965  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:40:50.641185  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:40:50.641354  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:40:50.641496  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:40:50.641661  127629 main.go:141] libmachine: Using SSH client type: native
	I0819 11:40:50.641906  127629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0819 11:40:50.641922  127629 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 11:40:50.748525  127629 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724067650.701950950
	
	I0819 11:40:50.748553  127629 fix.go:216] guest clock: 1724067650.701950950
	I0819 11:40:50.748564  127629 fix.go:229] Guest: 2024-08-19 11:40:50.70195095 +0000 UTC Remote: 2024-08-19 11:40:50.63768201 +0000 UTC m=+91.842315702 (delta=64.26894ms)
	I0819 11:40:50.748597  127629 fix.go:200] guest clock delta is within tolerance: 64.26894ms
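The guest/host clock comparison above reduces to an absolute delta check against a tolerance before deciding whether the guest clock needs resyncing. A small Go sketch using the two timestamps from this log; the 2-second tolerance is an assumed illustrative value, not taken from the output:

```go
package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports the difference between the guest clock and
// the host clock and whether it is small enough to skip a time resync.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1724067650, 701950950)                       // "guest clock: 1724067650.701950950"
	host := time.Date(2024, 8, 19, 11, 40, 50, 637682010, time.UTC) // "Remote: 2024-08-19 11:40:50.63768201"
	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok) // delta=64.26894ms
}
```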
	I0819 11:40:50.748603  127629 start.go:83] releasing machines lock for "ha-503856", held for 1m31.819625092s
	I0819 11:40:50.748621  127629 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:40:50.748909  127629 main.go:141] libmachine: (ha-503856) Calling .GetIP
	I0819 11:40:50.751570  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:50.751936  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:40:50.751971  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:50.752145  127629 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:40:50.752690  127629 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:40:50.752874  127629 main.go:141] libmachine: (ha-503856) Calling .DriverName
	I0819 11:40:50.752975  127629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 11:40:50.753032  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:40:50.753082  127629 ssh_runner.go:195] Run: cat /version.json
	I0819 11:40:50.753104  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHHostname
	I0819 11:40:50.755746  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:50.755776  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:50.756194  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:40:50.756222  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:50.756248  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:40:50.756265  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:50.756381  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:40:50.756500  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHPort
	I0819 11:40:50.756583  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:40:50.756668  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHKeyPath
	I0819 11:40:50.756730  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:40:50.756787  127629 main.go:141] libmachine: (ha-503856) Calling .GetSSHUsername
	I0819 11:40:50.756972  127629 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:40:50.756966  127629 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/ha-503856/id_rsa Username:docker}
	I0819 11:40:50.832531  127629 ssh_runner.go:195] Run: systemctl --version
	I0819 11:40:50.857358  127629 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 11:40:51.012982  127629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 11:40:51.018808  127629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 11:40:51.018891  127629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 11:40:51.028126  127629 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 11:40:51.028310  127629 start.go:495] detecting cgroup driver to use...
	I0819 11:40:51.028396  127629 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 11:40:51.048927  127629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 11:40:51.065769  127629 docker.go:217] disabling cri-docker service (if available) ...
	I0819 11:40:51.065840  127629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 11:40:51.080863  127629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 11:40:51.095976  127629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 11:40:51.252866  127629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 11:40:51.398292  127629 docker.go:233] disabling docker service ...
	I0819 11:40:51.398374  127629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 11:40:51.414328  127629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 11:40:51.428842  127629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 11:40:51.569966  127629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 11:40:51.729385  127629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 11:40:51.744302  127629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 11:40:51.765536  127629 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 11:40:51.765600  127629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:40:51.777569  127629 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 11:40:51.777630  127629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:40:51.789034  127629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:40:51.800086  127629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:40:51.811040  127629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 11:40:51.822108  127629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:40:51.833015  127629 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 11:40:51.843687  127629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
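The sed invocations above pin the pause image and cgroup settings in /etc/crio/crio.conf.d/02-crio.conf. A minimal Go equivalent of the first two edits, using regexp replacement instead of sed purely for illustration:

```go
package main

import (
	"fmt"
	"regexp"
)

// applyCrioOverrides performs, with Go regexps instead of sed, the same two
// in-place edits the log applies to 02-crio.conf: pinning the pause image and
// switching the cgroup manager to cgroupfs.
func applyCrioOverrides(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(applyCrioOverrides(in))
}
```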
	I0819 11:40:51.855390  127629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 11:40:51.865315  127629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 11:40:51.875101  127629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:40:52.021627  127629 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 11:40:56.357304  127629 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.335629559s)
	I0819 11:40:56.357350  127629 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 11:40:56.357417  127629 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
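After restarting CRI-O the runner waits up to 60 seconds for /var/run/crio/crio.sock to appear. A local Go sketch of that poll-until-deadline pattern (the real stat runs over SSH; the 500ms poll interval is an illustrative assumption):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a path until it exists or the timeout elapses,
// the same idea as the 60s wait for the CRI-O socket in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
```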
	I0819 11:40:56.362242  127629 start.go:563] Will wait 60s for crictl version
	I0819 11:40:56.362311  127629 ssh_runner.go:195] Run: which crictl
	I0819 11:40:56.366061  127629 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 11:40:56.404019  127629 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 11:40:56.404113  127629 ssh_runner.go:195] Run: crio --version
	I0819 11:40:56.433638  127629 ssh_runner.go:195] Run: crio --version
	I0819 11:40:56.463695  127629 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 11:40:56.465071  127629 main.go:141] libmachine: (ha-503856) Calling .GetIP
	I0819 11:40:56.467771  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:56.468167  127629 main.go:141] libmachine: (ha-503856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ab:80", ip: ""} in network mk-ha-503856: {Iface:virbr1 ExpiryTime:2024-08-19 12:29:39 +0000 UTC Type:0 Mac:52:54:00:d1:ab:80 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-503856 Clientid:01:52:54:00:d1:ab:80}
	I0819 11:40:56.468197  127629 main.go:141] libmachine: (ha-503856) DBG | domain ha-503856 has defined IP address 192.168.39.102 and MAC address 52:54:00:d1:ab:80 in network mk-ha-503856
	I0819 11:40:56.468380  127629 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 11:40:56.473075  127629 kubeadm.go:883] updating cluster {Name:ha-503856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 11:40:56.473226  127629 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 11:40:56.473280  127629 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 11:40:56.515674  127629 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 11:40:56.515697  127629 crio.go:433] Images already preloaded, skipping extraction
	I0819 11:40:56.515771  127629 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 11:40:56.549406  127629 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 11:40:56.549434  127629 cache_images.go:84] Images are preloaded, skipping loading
	I0819 11:40:56.549447  127629 kubeadm.go:934] updating node { 192.168.39.102 8443 v1.31.0 crio true true} ...
	I0819 11:40:56.549571  127629 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-503856 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 11:40:56.549663  127629 ssh_runner.go:195] Run: crio config
	I0819 11:40:56.599365  127629 cni.go:84] Creating CNI manager for ""
	I0819 11:40:56.599384  127629 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 11:40:56.599393  127629 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 11:40:56.599416  127629 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-503856 NodeName:ha-503856 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 11:40:56.599609  127629 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-503856"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 11:40:56.599631  127629 kube-vip.go:115] generating kube-vip config ...
	I0819 11:40:56.599674  127629 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 11:40:56.610772  127629 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 11:40:56.610903  127629 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
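The lb_enable/lb_port entries in the manifest above follow the successful modprobe of the IPVS modules a few lines earlier, which suggests that check is what gates the auto-enabled control-plane load balancing. A Go sketch under that assumption, reusing the exact modprobe command from the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

// controlPlaneLBSupported loads the IPVS kernel modules the same way the log
// does and reports success. The assumption here is that this result decides
// whether lb_enable/lb_port are rendered into the kube-vip manifest.
func controlPlaneLBSupported() bool {
	err := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack").Run()
	return err == nil
}

func main() {
	fmt.Println("auto-enable control-plane load-balancing:", controlPlaneLBSupported())
}
```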
	I0819 11:40:56.610966  127629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 11:40:56.620701  127629 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 11:40:56.620779  127629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 11:40:56.630324  127629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0819 11:40:56.647133  127629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 11:40:56.663873  127629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0819 11:40:56.680543  127629 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 11:40:56.698102  127629 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 11:40:56.702422  127629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:40:56.848366  127629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:40:56.863093  127629 certs.go:68] Setting up /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856 for IP: 192.168.39.102
	I0819 11:40:56.863115  127629 certs.go:194] generating shared ca certs ...
	I0819 11:40:56.863132  127629 certs.go:226] acquiring lock for ca certs: {Name:mkea0a5571fe2f86ea35780a8a0585cdcb5f186c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:40:56.863291  127629 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key
	I0819 11:40:56.863327  127629 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key
	I0819 11:40:56.863336  127629 certs.go:256] generating profile certs ...
	I0819 11:40:56.863403  127629 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/client.key
	I0819 11:40:56.863430  127629 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.755b656b
	I0819 11:40:56.863445  127629 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.755b656b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.183 192.168.39.122 192.168.39.254]
	I0819 11:40:56.942096  127629 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.755b656b ...
	I0819 11:40:56.942127  127629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.755b656b: {Name:mk7406fb59f8c51d1cb078d71f623a1983ecfb00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:40:56.942291  127629 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.755b656b ...
	I0819 11:40:56.942304  127629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.755b656b: {Name:mkab95e730345fbb832383ac7cee88f1454a2308 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:40:56.942374  127629 certs.go:381] copying /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt.755b656b -> /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt
	I0819 11:40:56.942527  127629 certs.go:385] copying /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key.755b656b -> /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key
	I0819 11:40:56.942663  127629 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key
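The apiserver profile certificate above is issued for a fixed list of IP SANs: the service IPs, localhost, the three control-plane node addresses, and the HA VIP 192.168.39.254. A self-contained Go sketch of issuing such a certificate; it signs with a throwaway CA rather than the existing minikubeCA key, and the key size and validity period are illustrative assumptions:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * 365 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the IP SANs listed in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * 365 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.102"), net.ParseIP("192.168.39.183"),
			net.ParseIP("192.168.39.122"), net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}
```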
	I0819 11:40:56.942683  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 11:40:56.942697  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 11:40:56.942708  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 11:40:56.942719  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 11:40:56.942732  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 11:40:56.942742  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 11:40:56.942762  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 11:40:56.942774  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 11:40:56.942823  127629 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem (1338 bytes)
	W0819 11:40:56.942852  127629 certs.go:480] ignoring /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632_empty.pem, impossibly tiny 0 bytes
	I0819 11:40:56.942861  127629 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 11:40:56.942884  127629 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem (1082 bytes)
	I0819 11:40:56.942906  127629 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem (1123 bytes)
	I0819 11:40:56.942929  127629 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem (1679 bytes)
	I0819 11:40:56.942965  127629 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 11:40:56.942991  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:40:56.943004  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem -> /usr/share/ca-certificates/106632.pem
	I0819 11:40:56.943016  127629 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> /usr/share/ca-certificates/1066322.pem
	I0819 11:40:56.943588  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 11:40:56.968838  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 11:40:56.992873  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 11:40:57.017654  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 11:40:57.043055  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 11:40:57.073204  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 11:40:57.097849  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 11:40:57.142529  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/ha-503856/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 11:40:57.213341  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 11:40:57.264119  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem --> /usr/share/ca-certificates/106632.pem (1338 bytes)
	I0819 11:40:57.301287  127629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /usr/share/ca-certificates/1066322.pem (1708 bytes)
	I0819 11:40:57.338348  127629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 11:40:57.364080  127629 ssh_runner.go:195] Run: openssl version
	I0819 11:40:57.369909  127629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 11:40:57.382106  127629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:40:57.390351  127629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:40:57.390420  127629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:40:57.403675  127629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 11:40:57.426957  127629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106632.pem && ln -fs /usr/share/ca-certificates/106632.pem /etc/ssl/certs/106632.pem"
	I0819 11:40:57.450316  127629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106632.pem
	I0819 11:40:57.462052  127629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 11:26 /usr/share/ca-certificates/106632.pem
	I0819 11:40:57.462112  127629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106632.pem
	I0819 11:40:57.487389  127629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106632.pem /etc/ssl/certs/51391683.0"
	I0819 11:40:57.507133  127629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1066322.pem && ln -fs /usr/share/ca-certificates/1066322.pem /etc/ssl/certs/1066322.pem"
	I0819 11:40:57.539008  127629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1066322.pem
	I0819 11:40:57.545366  127629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 11:26 /usr/share/ca-certificates/1066322.pem
	I0819 11:40:57.545427  127629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1066322.pem
	I0819 11:40:57.554586  127629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1066322.pem /etc/ssl/certs/3ec20f2e.0"
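Each CA certificate above is installed by copying it under /usr/share/ca-certificates, computing its OpenSSL subject hash, and symlinking it into /etc/ssl/certs as <hash>.0 so OpenSSL-based clients can find it. A Go sketch of that pattern, assuming openssl on PATH and write access to /etc/ssl/certs:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert computes the certificate's OpenSSL subject hash and symlinks
// the certificate as /etc/ssl/certs/<hash>.0, mirroring the
// "openssl x509 -hash" plus "ln -fs" pair in the log.
func installCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // emulate the -f in "ln -fs"
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
```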
	I0819 11:40:57.570725  127629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 11:40:57.576562  127629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 11:40:57.584050  127629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 11:40:57.591568  127629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 11:40:57.599644  127629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 11:40:57.609167  127629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 11:40:57.615684  127629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
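The six "openssl x509 -checkend 86400" runs above ask whether each certificate expires within the next 24 hours. The same check expressed in Go, with an illustrative certificate path:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within the
// given window, the check the log performs with "-checkend 86400".
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	fmt.Println(expiring, err)
}
```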
	I0819 11:40:57.622392  127629 kubeadm.go:392] StartCluster: {Name:ha-503856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-503856 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:40:57.622526  127629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 11:40:57.622582  127629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 11:40:57.674980  127629 cri.go:89] found id: "234d581f1f24704042d96c4d8b3e01c914726f39a42283ba031a94abc64f2b3b"
	I0819 11:40:57.675008  127629 cri.go:89] found id: "bb00d7b27f13dda3e761726ad01d0c57671d29780260645aec6532f94a576f16"
	I0819 11:40:57.675012  127629 cri.go:89] found id: "cc7a981129a72e9a8516ad8f5935ff94bca370deb2b9406a0bd5d1d7b4f2adbc"
	I0819 11:40:57.675014  127629 cri.go:89] found id: "c6dba5fc1adfbc807731637b2922d432cd2b239352bf687cff7bee78b45d9342"
	I0819 11:40:57.675017  127629 cri.go:89] found id: "9d70a071997dc45b95134d59cd17221dc42d56b4b491ef282663f00bf9876fe1"
	I0819 11:40:57.675020  127629 cri.go:89] found id: "6c7867b6691ac22ff04851850c5c61a8f266622c1c39592596364c7fc6e99c39"
	I0819 11:40:57.675023  127629 cri.go:89] found id: "e67513ebd15d07c15fbfc61b2c17800f56b9db39a3b81085018122fe96a1f9de"
	I0819 11:40:57.675026  127629 cri.go:89] found id: "8315e448000809a9b1ec9c80658a16ee2c0abdfb6a00d9cf88e506833fc2e464"
	I0819 11:40:57.675028  127629 cri.go:89] found id: "1964134e9de80be5a82ee203eebe9f5718ba2a8b84e55479e47e81a74a259a50"
	I0819 11:40:57.675032  127629 cri.go:89] found id: "68730d308f1456cb2e025409c4d2edab9b17464c74f9beabfafc13246e8cc2b3"
	I0819 11:40:57.675047  127629 cri.go:89] found id: "11a47171a5438e39273ec80f4da3583d5a56af5d5de55c977d904283dc19b112"
	I0819 11:40:57.675049  127629 cri.go:89] found id: "ccea80d1a22a4536a82c166633220aa67d32633919751a2cf1da7f06c91bc674"
	I0819 11:40:57.675053  127629 cri.go:89] found id: "3879d2de39f1c9189c2e3eb16e843ddcbe448dd4be3f2b5ba5617abddfeb007a"
	I0819 11:40:57.675055  127629 cri.go:89] found id: "c0a1ce45d7b784d0b5b53838e68b5660b6da14d8a1966f23cc6949e4c31ea98e"
	I0819 11:40:57.675060  127629 cri.go:89] found id: "df01b4ed6011abdeb64cada43bcfa1f28ba45ef7e17f2991c3ec8a035d214f2e"
	I0819 11:40:57.675073  127629 cri.go:89] found id: ""
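The container IDs above come from a "crictl ps -a --quiet" query filtered by the kube-system namespace label. A Go sketch of that query, assuming crictl on PATH (the real call runs over SSH and is wrapped in "sudo -s eval"):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers returns the IDs of all containers labelled with the
// kube-system namespace, one per output line of crictl.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	fmt.Println(len(ids), "containers found", err)
}
```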
	I0819 11:40:57.675121  127629 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 19 11:46:37 ha-503856 crio[3547]: time="2024-08-19 11:46:37.619714646Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067997619682452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2cf782e2-0758-42a7-a723-f2a8ffa8b4a1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:46:37 ha-503856 crio[3547]: time="2024-08-19 11:46:37.620330299Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7846019a-6908-422d-a2d2-e0b0e5d23c3f name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:46:37 ha-503856 crio[3547]: time="2024-08-19 11:46:37.620383351Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7846019a-6908-422d-a2d2-e0b0e5d23c3f name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:46:37 ha-503856 crio[3547]: time="2024-08-19 11:46:37.620860688Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa382afea1ef150e804db6767bc2ba83ea34772a4ba82ac8fcf6e82f909e789a,PodSandboxId:1b2812ce91f86e917f992cd073244caafedb88a7248b5086696b014ae2b5e617,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724067739891631252,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c212413-ac90-45fb-92de-bfd9e9115540,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21e59f85336459296bb85fc223b61e6ee435d0dfdac2c736a94f9c4d32df3033,PodSandboxId:03684d4d6f92449da28270637db323b003f7aeb271286608f70d309e80fa980e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724067704893586022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22b93ecec4cfd4767d930360e5939ae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c7e0e730267c99dc16ff00240be2f1ea3289554181af57aeb7d2da5ff7c80a4,PodSandboxId:2c1006e249933ddf7526b0366e69a2d4560f64b7c3642b0bef90ead779de6d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724067700894142300,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 147340613ea835c906d66a92a67bf8cf,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c49405a138a480ceb145b0c06fa0c19c7e9d8739b6e902e7728ff9edf8773ac,PodSandboxId:1b2812ce91f86e917f992cd073244caafedb88a7248b5086696b014ae2b5e617,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724067698892354341,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c212413-ac90-45fb-92de-bfd9e9115540,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14db04765cb31df5016a695bff0a72927c98c131587e975c013da68f3b4a36f1,PodSandboxId:7371637445eb2b64a8e1a3fe4f4176c338d5e58f62c46f68623a43115182d991,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724067697174958993,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7wpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2dd3c9-6ef2-4aaf-a44f-04fcd5b5ee2a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3bfc361f15572573d2773ebd096cf49964223a5aa1d402102ecc37ecfb1a14,PodSandboxId:c7bf4b267d73728fabdc475b8c3bda405ee0766ebf825ebee802b0fff6f94280,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724067677098549114,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a2609971bbd7c8401e4db81e3ea55d8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4410418eb581e358ab209bde84564e655ccd1d460399386fe5db165fc84bfc80,PodSandboxId:0461b7050ad929e00c6b4d08d2a1b22768d5b113605a89f03461fdd36b55fcd0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724067663642043955,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8054009-c06a-4ccc-b6c4-22e0f6bb632a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:85182c790c374aa6e461a0c8e3ed7dbd713c45e212f3abf2baa82b7db579dcbe,PodSandboxId:82794ac70c5b241d166513b9bc0cd6d94f8d4c39869df9c48ed62cbc0a955c04,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724067663844109398,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-st2mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e7c93b-40a9-4902-b1a5-5a6bcc55735c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d4ca83f
34581802d5c17fb457aa924254ed7a3a6c474aecf081a05b7b0d384,PodSandboxId:23eac961d8ef87891beee2753a152ae3b108e9854351da0e44d9a6e733bf348b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724067663691206920,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9aebe0e986b3c246dae4d64df3eee15,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e253084db077adff3d7dc910bd0fc
7c7d6eb0d8b1b91bcd1ebce47ff183cf7c,PodSandboxId:eb9d374c74b751a8c3b1af72dadb750590b5950d5abc0f2ef83c9dc2a955eb0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724067663581594688,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d885daa5ebbe527098639eca601027a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5dc24c345e023d2c1b39aeaa399932e0075a1d509e7efe0fc630b13b34930d,PodSandboxId:036
84d4d6f92449da28270637db323b003f7aeb271286608f70d309e80fa980e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724067663575899089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22b93ecec4cfd4767d930360e5939ae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db6351db80fdc3b8572db21488088e0c429be0a4387ee0793081531df343c543,PodSandboxI
d:2c1006e249933ddf7526b0366e69a2d4560f64b7c3642b0bef90ead779de6d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724067663488713880,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 147340613ea835c906d66a92a67bf8cf,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:234d581f1f24704042d96c4d8b3e01c914726f39a42283ba031a94abc64f2b3b,PodSandboxId:6d5134c5654f95d6f33a
686c2308569408d964927584f53b946162d8b76980cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067657399532799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2jdlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad8206ac-dc67-4fcb-a4ad-a431f3b0b7cd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb00d7b27f13dda3e761726ad01d0c57671d29780260645aec6532f94a576f16,PodSandboxId:9737bdaf9902ef57d23d291717c6b0ba89bce621b66977ea9dc7febc9b09e758,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067657381769049,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5dbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530828e-1061-434c-ad2f-80847f3ab171,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56a5ad9cc18e747407a6ebf193b0141d5d39e2506db7ff36e9f281ca31d07175,PodSandboxId:1191cb555eb550a2e7a62a8d5ad002f11d0ec47cb7997421e63fcfcdd619bb42,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724067158500933131,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7wpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2dd3c9-6ef2-4aaf-a44f-04fcd5b5ee2a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67513ebd15d07c15fbfc61b2c17800f56b9db39a3b81085018122fe96a1f9de,PodSandboxId:13c07aa9a002518609bd210554f3a19471e33ebfd9b816b1390235a4c81233e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724067024223183467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5dbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530828e-1061-434c-ad2f-80847f3ab171,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8315e448000809a9b1ec9c80658a16ee2c0abdfb6a00d9cf88e506833fc2e464,PodSandboxId:0b0b0a070f3ec813823103f1ea8a06af695e4510a42cd50f86da517d8da95ef1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724067024221425951,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-2jdlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad8206ac-dc67-4fcb-a4ad-a431f3b0b7cd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1964134e9de80be5a82ee203eebe9f5718ba2a8b84e55479e47e81a74a259a50,PodSandboxId:9079c84056e4b237e79326194bb75a2f176c9f8bf9556b3168d050890162ac6c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724067012620055458,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-st2mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e7c93b-40a9-4902-b1a5-5a6bcc55735c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68730d308f1456cb2e025409c4d2edab9b17464c74f9beabfafc13246e8cc2b3,PodSandboxId:adace0914115cd0a47335d629a4cfbb92f7251bc72bb0a436ae6daeab0d4192d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724067008316858973,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8054009-c06a-4ccc-b6c4-22e0f6bb632a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccea80d1a22a4536a82c166633220aa67d32633919751a2cf1da7f06c91bc674,PodSandboxId:982016c43ab0e7cfe37c1296b1e9abb139d5479ac81c6f267b9f0a41da9ab8a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724066997299442286,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9aebe0e986b3c246dae4d64df3eee15,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3879d2de39f1c9189c2e3eb16e843ddcbe448dd4be3f2b5ba5617abddfeb007a,PodSandboxId:eb7c9eb1ba04206c376798a19b430446dfef69f4a421e9d862de5c7e5f687408,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724066997297013428,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d885daa5ebbe527098639eca601027a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7846019a-6908-422d-a2d2-e0b0e5d23c3f name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:46:37 ha-503856 crio[3547]: time="2024-08-19 11:46:37.667222312Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b4589083-2fd2-4817-86bd-957e63fd2f95 name=/runtime.v1.RuntimeService/Version
	Aug 19 11:46:37 ha-503856 crio[3547]: time="2024-08-19 11:46:37.667306070Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b4589083-2fd2-4817-86bd-957e63fd2f95 name=/runtime.v1.RuntimeService/Version
	Aug 19 11:46:37 ha-503856 crio[3547]: time="2024-08-19 11:46:37.668688997Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0c1786df-73d8-4fdc-a2a4-a1bdc79d1bf3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:46:37 ha-503856 crio[3547]: time="2024-08-19 11:46:37.669272920Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067997669245012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c1786df-73d8-4fdc-a2a4-a1bdc79d1bf3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:46:37 ha-503856 crio[3547]: time="2024-08-19 11:46:37.669785863Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb8f5add-52b4-47b7-b9ff-a8b98be99b4a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:46:37 ha-503856 crio[3547]: time="2024-08-19 11:46:37.669869259Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb8f5add-52b4-47b7-b9ff-a8b98be99b4a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:46:37 ha-503856 crio[3547]: time="2024-08-19 11:46:37.675369649Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa382afea1ef150e804db6767bc2ba83ea34772a4ba82ac8fcf6e82f909e789a,PodSandboxId:1b2812ce91f86e917f992cd073244caafedb88a7248b5086696b014ae2b5e617,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724067739891631252,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c212413-ac90-45fb-92de-bfd9e9115540,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21e59f85336459296bb85fc223b61e6ee435d0dfdac2c736a94f9c4d32df3033,PodSandboxId:03684d4d6f92449da28270637db323b003f7aeb271286608f70d309e80fa980e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724067704893586022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22b93ecec4cfd4767d930360e5939ae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c7e0e730267c99dc16ff00240be2f1ea3289554181af57aeb7d2da5ff7c80a4,PodSandboxId:2c1006e249933ddf7526b0366e69a2d4560f64b7c3642b0bef90ead779de6d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724067700894142300,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 147340613ea835c906d66a92a67bf8cf,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c49405a138a480ceb145b0c06fa0c19c7e9d8739b6e902e7728ff9edf8773ac,PodSandboxId:1b2812ce91f86e917f992cd073244caafedb88a7248b5086696b014ae2b5e617,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724067698892354341,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c212413-ac90-45fb-92de-bfd9e9115540,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14db04765cb31df5016a695bff0a72927c98c131587e975c013da68f3b4a36f1,PodSandboxId:7371637445eb2b64a8e1a3fe4f4176c338d5e58f62c46f68623a43115182d991,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724067697174958993,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7wpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2dd3c9-6ef2-4aaf-a44f-04fcd5b5ee2a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3bfc361f15572573d2773ebd096cf49964223a5aa1d402102ecc37ecfb1a14,PodSandboxId:c7bf4b267d73728fabdc475b8c3bda405ee0766ebf825ebee802b0fff6f94280,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724067677098549114,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a2609971bbd7c8401e4db81e3ea55d8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4410418eb581e358ab209bde84564e655ccd1d460399386fe5db165fc84bfc80,PodSandboxId:0461b7050ad929e00c6b4d08d2a1b22768d5b113605a89f03461fdd36b55fcd0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724067663642043955,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8054009-c06a-4ccc-b6c4-22e0f6bb632a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:85182c790c374aa6e461a0c8e3ed7dbd713c45e212f3abf2baa82b7db579dcbe,PodSandboxId:82794ac70c5b241d166513b9bc0cd6d94f8d4c39869df9c48ed62cbc0a955c04,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724067663844109398,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-st2mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e7c93b-40a9-4902-b1a5-5a6bcc55735c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d4ca83f
34581802d5c17fb457aa924254ed7a3a6c474aecf081a05b7b0d384,PodSandboxId:23eac961d8ef87891beee2753a152ae3b108e9854351da0e44d9a6e733bf348b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724067663691206920,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9aebe0e986b3c246dae4d64df3eee15,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e253084db077adff3d7dc910bd0fc
7c7d6eb0d8b1b91bcd1ebce47ff183cf7c,PodSandboxId:eb9d374c74b751a8c3b1af72dadb750590b5950d5abc0f2ef83c9dc2a955eb0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724067663581594688,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d885daa5ebbe527098639eca601027a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5dc24c345e023d2c1b39aeaa399932e0075a1d509e7efe0fc630b13b34930d,PodSandboxId:036
84d4d6f92449da28270637db323b003f7aeb271286608f70d309e80fa980e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724067663575899089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22b93ecec4cfd4767d930360e5939ae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db6351db80fdc3b8572db21488088e0c429be0a4387ee0793081531df343c543,PodSandboxI
d:2c1006e249933ddf7526b0366e69a2d4560f64b7c3642b0bef90ead779de6d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724067663488713880,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 147340613ea835c906d66a92a67bf8cf,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:234d581f1f24704042d96c4d8b3e01c914726f39a42283ba031a94abc64f2b3b,PodSandboxId:6d5134c5654f95d6f33a
686c2308569408d964927584f53b946162d8b76980cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067657399532799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2jdlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad8206ac-dc67-4fcb-a4ad-a431f3b0b7cd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb00d7b27f13dda3e761726ad01d0c57671d29780260645aec6532f94a576f16,PodSandboxId:9737bdaf9902ef57d23d291717c6b0ba89bce621b66977ea9dc7febc9b09e758,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067657381769049,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5dbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530828e-1061-434c-ad2f-80847f3ab171,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56a5ad9cc18e747407a6ebf193b0141d5d39e2506db7ff36e9f281ca31d07175,PodSandboxId:1191cb555eb550a2e7a62a8d5ad002f11d0ec47cb7997421e63fcfcdd619bb42,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724067158500933131,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7wpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2dd3c9-6ef2-4aaf-a44f-04fcd5b5ee2a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67513ebd15d07c15fbfc61b2c17800f56b9db39a3b81085018122fe96a1f9de,PodSandboxId:13c07aa9a002518609bd210554f3a19471e33ebfd9b816b1390235a4c81233e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724067024223183467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5dbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530828e-1061-434c-ad2f-80847f3ab171,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8315e448000809a9b1ec9c80658a16ee2c0abdfb6a00d9cf88e506833fc2e464,PodSandboxId:0b0b0a070f3ec813823103f1ea8a06af695e4510a42cd50f86da517d8da95ef1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724067024221425951,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-2jdlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad8206ac-dc67-4fcb-a4ad-a431f3b0b7cd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1964134e9de80be5a82ee203eebe9f5718ba2a8b84e55479e47e81a74a259a50,PodSandboxId:9079c84056e4b237e79326194bb75a2f176c9f8bf9556b3168d050890162ac6c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724067012620055458,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-st2mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e7c93b-40a9-4902-b1a5-5a6bcc55735c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68730d308f1456cb2e025409c4d2edab9b17464c74f9beabfafc13246e8cc2b3,PodSandboxId:adace0914115cd0a47335d629a4cfbb92f7251bc72bb0a436ae6daeab0d4192d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724067008316858973,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8054009-c06a-4ccc-b6c4-22e0f6bb632a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccea80d1a22a4536a82c166633220aa67d32633919751a2cf1da7f06c91bc674,PodSandboxId:982016c43ab0e7cfe37c1296b1e9abb139d5479ac81c6f267b9f0a41da9ab8a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724066997299442286,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9aebe0e986b3c246dae4d64df3eee15,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3879d2de39f1c9189c2e3eb16e843ddcbe448dd4be3f2b5ba5617abddfeb007a,PodSandboxId:eb7c9eb1ba04206c376798a19b430446dfef69f4a421e9d862de5c7e5f687408,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724066997297013428,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d885daa5ebbe527098639eca601027a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb8f5add-52b4-47b7-b9ff-a8b98be99b4a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:46:37 ha-503856 crio[3547]: time="2024-08-19 11:46:37.718467645Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5bfdca16-240a-44b6-a8ca-55450322282f name=/runtime.v1.RuntimeService/Version
	Aug 19 11:46:37 ha-503856 crio[3547]: time="2024-08-19 11:46:37.718560297Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5bfdca16-240a-44b6-a8ca-55450322282f name=/runtime.v1.RuntimeService/Version
	Aug 19 11:46:37 ha-503856 crio[3547]: time="2024-08-19 11:46:37.719951861Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=00293e83-a7bd-41e8-83e0-932f053d010b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:46:37 ha-503856 crio[3547]: time="2024-08-19 11:46:37.720468472Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067997720443708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=00293e83-a7bd-41e8-83e0-932f053d010b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:46:37 ha-503856 crio[3547]: time="2024-08-19 11:46:37.720942238Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f80859b8-e45c-414e-a94b-c8966c4f82d0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:46:37 ha-503856 crio[3547]: time="2024-08-19 11:46:37.721005154Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f80859b8-e45c-414e-a94b-c8966c4f82d0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:46:37 ha-503856 crio[3547]: time="2024-08-19 11:46:37.721623960Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa382afea1ef150e804db6767bc2ba83ea34772a4ba82ac8fcf6e82f909e789a,PodSandboxId:1b2812ce91f86e917f992cd073244caafedb88a7248b5086696b014ae2b5e617,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724067739891631252,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c212413-ac90-45fb-92de-bfd9e9115540,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21e59f85336459296bb85fc223b61e6ee435d0dfdac2c736a94f9c4d32df3033,PodSandboxId:03684d4d6f92449da28270637db323b003f7aeb271286608f70d309e80fa980e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724067704893586022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22b93ecec4cfd4767d930360e5939ae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c7e0e730267c99dc16ff00240be2f1ea3289554181af57aeb7d2da5ff7c80a4,PodSandboxId:2c1006e249933ddf7526b0366e69a2d4560f64b7c3642b0bef90ead779de6d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724067700894142300,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 147340613ea835c906d66a92a67bf8cf,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c49405a138a480ceb145b0c06fa0c19c7e9d8739b6e902e7728ff9edf8773ac,PodSandboxId:1b2812ce91f86e917f992cd073244caafedb88a7248b5086696b014ae2b5e617,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724067698892354341,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c212413-ac90-45fb-92de-bfd9e9115540,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14db04765cb31df5016a695bff0a72927c98c131587e975c013da68f3b4a36f1,PodSandboxId:7371637445eb2b64a8e1a3fe4f4176c338d5e58f62c46f68623a43115182d991,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724067697174958993,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7wpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2dd3c9-6ef2-4aaf-a44f-04fcd5b5ee2a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3bfc361f15572573d2773ebd096cf49964223a5aa1d402102ecc37ecfb1a14,PodSandboxId:c7bf4b267d73728fabdc475b8c3bda405ee0766ebf825ebee802b0fff6f94280,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724067677098549114,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a2609971bbd7c8401e4db81e3ea55d8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4410418eb581e358ab209bde84564e655ccd1d460399386fe5db165fc84bfc80,PodSandboxId:0461b7050ad929e00c6b4d08d2a1b22768d5b113605a89f03461fdd36b55fcd0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724067663642043955,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8054009-c06a-4ccc-b6c4-22e0f6bb632a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:85182c790c374aa6e461a0c8e3ed7dbd713c45e212f3abf2baa82b7db579dcbe,PodSandboxId:82794ac70c5b241d166513b9bc0cd6d94f8d4c39869df9c48ed62cbc0a955c04,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724067663844109398,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-st2mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e7c93b-40a9-4902-b1a5-5a6bcc55735c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d4ca83f
34581802d5c17fb457aa924254ed7a3a6c474aecf081a05b7b0d384,PodSandboxId:23eac961d8ef87891beee2753a152ae3b108e9854351da0e44d9a6e733bf348b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724067663691206920,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9aebe0e986b3c246dae4d64df3eee15,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e253084db077adff3d7dc910bd0fc
7c7d6eb0d8b1b91bcd1ebce47ff183cf7c,PodSandboxId:eb9d374c74b751a8c3b1af72dadb750590b5950d5abc0f2ef83c9dc2a955eb0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724067663581594688,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d885daa5ebbe527098639eca601027a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5dc24c345e023d2c1b39aeaa399932e0075a1d509e7efe0fc630b13b34930d,PodSandboxId:036
84d4d6f92449da28270637db323b003f7aeb271286608f70d309e80fa980e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724067663575899089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22b93ecec4cfd4767d930360e5939ae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db6351db80fdc3b8572db21488088e0c429be0a4387ee0793081531df343c543,PodSandboxI
d:2c1006e249933ddf7526b0366e69a2d4560f64b7c3642b0bef90ead779de6d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724067663488713880,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 147340613ea835c906d66a92a67bf8cf,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:234d581f1f24704042d96c4d8b3e01c914726f39a42283ba031a94abc64f2b3b,PodSandboxId:6d5134c5654f95d6f33a
686c2308569408d964927584f53b946162d8b76980cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067657399532799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2jdlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad8206ac-dc67-4fcb-a4ad-a431f3b0b7cd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb00d7b27f13dda3e761726ad01d0c57671d29780260645aec6532f94a576f16,PodSandboxId:9737bdaf9902ef57d23d291717c6b0ba89bce621b66977ea9dc7febc9b09e758,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067657381769049,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5dbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530828e-1061-434c-ad2f-80847f3ab171,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56a5ad9cc18e747407a6ebf193b0141d5d39e2506db7ff36e9f281ca31d07175,PodSandboxId:1191cb555eb550a2e7a62a8d5ad002f11d0ec47cb7997421e63fcfcdd619bb42,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724067158500933131,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7wpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2dd3c9-6ef2-4aaf-a44f-04fcd5b5ee2a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67513ebd15d07c15fbfc61b2c17800f56b9db39a3b81085018122fe96a1f9de,PodSandboxId:13c07aa9a002518609bd210554f3a19471e33ebfd9b816b1390235a4c81233e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724067024223183467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5dbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530828e-1061-434c-ad2f-80847f3ab171,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8315e448000809a9b1ec9c80658a16ee2c0abdfb6a00d9cf88e506833fc2e464,PodSandboxId:0b0b0a070f3ec813823103f1ea8a06af695e4510a42cd50f86da517d8da95ef1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724067024221425951,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-2jdlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad8206ac-dc67-4fcb-a4ad-a431f3b0b7cd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1964134e9de80be5a82ee203eebe9f5718ba2a8b84e55479e47e81a74a259a50,PodSandboxId:9079c84056e4b237e79326194bb75a2f176c9f8bf9556b3168d050890162ac6c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724067012620055458,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-st2mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e7c93b-40a9-4902-b1a5-5a6bcc55735c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68730d308f1456cb2e025409c4d2edab9b17464c74f9beabfafc13246e8cc2b3,PodSandboxId:adace0914115cd0a47335d629a4cfbb92f7251bc72bb0a436ae6daeab0d4192d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724067008316858973,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8054009-c06a-4ccc-b6c4-22e0f6bb632a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccea80d1a22a4536a82c166633220aa67d32633919751a2cf1da7f06c91bc674,PodSandboxId:982016c43ab0e7cfe37c1296b1e9abb139d5479ac81c6f267b9f0a41da9ab8a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724066997299442286,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9aebe0e986b3c246dae4d64df3eee15,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3879d2de39f1c9189c2e3eb16e843ddcbe448dd4be3f2b5ba5617abddfeb007a,PodSandboxId:eb7c9eb1ba04206c376798a19b430446dfef69f4a421e9d862de5c7e5f687408,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724066997297013428,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d885daa5ebbe527098639eca601027a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f80859b8-e45c-414e-a94b-c8966c4f82d0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:46:37 ha-503856 crio[3547]: time="2024-08-19 11:46:37.763485504Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9287416f-b4bd-47fa-838e-ef63c893dd9f name=/runtime.v1.RuntimeService/Version
	Aug 19 11:46:37 ha-503856 crio[3547]: time="2024-08-19 11:46:37.763568884Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9287416f-b4bd-47fa-838e-ef63c893dd9f name=/runtime.v1.RuntimeService/Version
	Aug 19 11:46:37 ha-503856 crio[3547]: time="2024-08-19 11:46:37.764889105Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0af9bbc6-78d1-4e6f-bb78-6c23c1ce1cae name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:46:37 ha-503856 crio[3547]: time="2024-08-19 11:46:37.765702406Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067997765610578,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0af9bbc6-78d1-4e6f-bb78-6c23c1ce1cae name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 11:46:37 ha-503856 crio[3547]: time="2024-08-19 11:46:37.766222299Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8633909-007a-4774-87a0-d840e3bdb075 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:46:37 ha-503856 crio[3547]: time="2024-08-19 11:46:37.766281573Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8633909-007a-4774-87a0-d840e3bdb075 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 11:46:37 ha-503856 crio[3547]: time="2024-08-19 11:46:37.766708009Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa382afea1ef150e804db6767bc2ba83ea34772a4ba82ac8fcf6e82f909e789a,PodSandboxId:1b2812ce91f86e917f992cd073244caafedb88a7248b5086696b014ae2b5e617,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724067739891631252,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c212413-ac90-45fb-92de-bfd9e9115540,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21e59f85336459296bb85fc223b61e6ee435d0dfdac2c736a94f9c4d32df3033,PodSandboxId:03684d4d6f92449da28270637db323b003f7aeb271286608f70d309e80fa980e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724067704893586022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22b93ecec4cfd4767d930360e5939ae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c7e0e730267c99dc16ff00240be2f1ea3289554181af57aeb7d2da5ff7c80a4,PodSandboxId:2c1006e249933ddf7526b0366e69a2d4560f64b7c3642b0bef90ead779de6d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724067700894142300,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 147340613ea835c906d66a92a67bf8cf,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c49405a138a480ceb145b0c06fa0c19c7e9d8739b6e902e7728ff9edf8773ac,PodSandboxId:1b2812ce91f86e917f992cd073244caafedb88a7248b5086696b014ae2b5e617,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724067698892354341,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c212413-ac90-45fb-92de-bfd9e9115540,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14db04765cb31df5016a695bff0a72927c98c131587e975c013da68f3b4a36f1,PodSandboxId:7371637445eb2b64a8e1a3fe4f4176c338d5e58f62c46f68623a43115182d991,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724067697174958993,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7wpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2dd3c9-6ef2-4aaf-a44f-04fcd5b5ee2a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3bfc361f15572573d2773ebd096cf49964223a5aa1d402102ecc37ecfb1a14,PodSandboxId:c7bf4b267d73728fabdc475b8c3bda405ee0766ebf825ebee802b0fff6f94280,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724067677098549114,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a2609971bbd7c8401e4db81e3ea55d8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4410418eb581e358ab209bde84564e655ccd1d460399386fe5db165fc84bfc80,PodSandboxId:0461b7050ad929e00c6b4d08d2a1b22768d5b113605a89f03461fdd36b55fcd0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724067663642043955,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8054009-c06a-4ccc-b6c4-22e0f6bb632a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:85182c790c374aa6e461a0c8e3ed7dbd713c45e212f3abf2baa82b7db579dcbe,PodSandboxId:82794ac70c5b241d166513b9bc0cd6d94f8d4c39869df9c48ed62cbc0a955c04,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724067663844109398,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-st2mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e7c93b-40a9-4902-b1a5-5a6bcc55735c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d4ca83f
34581802d5c17fb457aa924254ed7a3a6c474aecf081a05b7b0d384,PodSandboxId:23eac961d8ef87891beee2753a152ae3b108e9854351da0e44d9a6e733bf348b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724067663691206920,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9aebe0e986b3c246dae4d64df3eee15,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e253084db077adff3d7dc910bd0fc
7c7d6eb0d8b1b91bcd1ebce47ff183cf7c,PodSandboxId:eb9d374c74b751a8c3b1af72dadb750590b5950d5abc0f2ef83c9dc2a955eb0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724067663581594688,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d885daa5ebbe527098639eca601027a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5dc24c345e023d2c1b39aeaa399932e0075a1d509e7efe0fc630b13b34930d,PodSandboxId:036
84d4d6f92449da28270637db323b003f7aeb271286608f70d309e80fa980e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724067663575899089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22b93ecec4cfd4767d930360e5939ae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db6351db80fdc3b8572db21488088e0c429be0a4387ee0793081531df343c543,PodSandboxI
d:2c1006e249933ddf7526b0366e69a2d4560f64b7c3642b0bef90ead779de6d92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724067663488713880,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 147340613ea835c906d66a92a67bf8cf,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:234d581f1f24704042d96c4d8b3e01c914726f39a42283ba031a94abc64f2b3b,PodSandboxId:6d5134c5654f95d6f33a
686c2308569408d964927584f53b946162d8b76980cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067657399532799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-2jdlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad8206ac-dc67-4fcb-a4ad-a431f3b0b7cd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb00d7b27f13dda3e761726ad01d0c57671d29780260645aec6532f94a576f16,PodSandboxId:9737bdaf9902ef57d23d291717c6b0ba89bce621b66977ea9dc7febc9b09e758,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724067657381769049,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5dbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530828e-1061-434c-ad2f-80847f3ab171,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56a5ad9cc18e747407a6ebf193b0141d5d39e2506db7ff36e9f281ca31d07175,PodSandboxId:1191cb555eb550a2e7a62a8d5ad002f11d0ec47cb7997421e63fcfcdd619bb42,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724067158500933131,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7wpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2dd3c9-6ef2-4aaf-a44f-04fcd5b5ee2a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67513ebd15d07c15fbfc61b2c17800f56b9db39a3b81085018122fe96a1f9de,PodSandboxId:13c07aa9a002518609bd210554f3a19471e33ebfd9b816b1390235a4c81233e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724067024223183467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5dbrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5530828e-1061-434c-ad2f-80847f3ab171,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8315e448000809a9b1ec9c80658a16ee2c0abdfb6a00d9cf88e506833fc2e464,PodSandboxId:0b0b0a070f3ec813823103f1ea8a06af695e4510a42cd50f86da517d8da95ef1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724067024221425951,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-2jdlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad8206ac-dc67-4fcb-a4ad-a431f3b0b7cd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1964134e9de80be5a82ee203eebe9f5718ba2a8b84e55479e47e81a74a259a50,PodSandboxId:9079c84056e4b237e79326194bb75a2f176c9f8bf9556b3168d050890162ac6c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724067012620055458,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-st2mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e7c93b-40a9-4902-b1a5-5a6bcc55735c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68730d308f1456cb2e025409c4d2edab9b17464c74f9beabfafc13246e8cc2b3,PodSandboxId:adace0914115cd0a47335d629a4cfbb92f7251bc72bb0a436ae6daeab0d4192d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724067008316858973,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8054009-c06a-4ccc-b6c4-22e0f6bb632a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccea80d1a22a4536a82c166633220aa67d32633919751a2cf1da7f06c91bc674,PodSandboxId:982016c43ab0e7cfe37c1296b1e9abb139d5479ac81c6f267b9f0a41da9ab8a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724066997299442286,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9aebe0e986b3c246dae4d64df3eee15,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3879d2de39f1c9189c2e3eb16e843ddcbe448dd4be3f2b5ba5617abddfeb007a,PodSandboxId:eb7c9eb1ba04206c376798a19b430446dfef69f4a421e9d862de5c7e5f687408,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724066997297013428,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-503856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d885daa5ebbe527098639eca601027a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b8633909-007a-4774-87a0-d840e3bdb075 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fa382afea1ef1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   1b2812ce91f86       storage-provisioner
	21e59f8533645       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   2                   03684d4d6f924       kube-controller-manager-ha-503856
	4c7e0e730267c       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            3                   2c1006e249933       kube-apiserver-ha-503856
	9c49405a138a4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   1b2812ce91f86       storage-provisioner
	14db04765cb31       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   7371637445eb2       busybox-7dff88458-7wpbx
	0f3bfc361f155       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   c7bf4b267d737       kube-vip-ha-503856
	85182c790c374       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   82794ac70c5b2       kindnet-st2mx
	b4d4ca83f3458       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      5 minutes ago       Running             kube-scheduler            1                   23eac961d8ef8       kube-scheduler-ha-503856
	4410418eb581e       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      5 minutes ago       Running             kube-proxy                1                   0461b7050ad92       kube-proxy-d6zw9
	9e253084db077       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   eb9d374c74b75       etcd-ha-503856
	bb5dc24c345e0       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      5 minutes ago       Exited              kube-controller-manager   1                   03684d4d6f924       kube-controller-manager-ha-503856
	db6351db80fdc       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      5 minutes ago       Exited              kube-apiserver            2                   2c1006e249933       kube-apiserver-ha-503856
	234d581f1f247       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   6d5134c5654f9       coredns-6f6b679f8f-2jdlw
	bb00d7b27f13d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   9737bdaf9902e       coredns-6f6b679f8f-5dbrz
	56a5ad9cc18e7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   1191cb555eb55       busybox-7dff88458-7wpbx
	e67513ebd15d0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   13c07aa9a0025       coredns-6f6b679f8f-5dbrz
	8315e44800080       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   0b0b0a070f3ec       coredns-6f6b679f8f-2jdlw
	1964134e9de80       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    16 minutes ago      Exited              kindnet-cni               0                   9079c84056e4b       kindnet-st2mx
	68730d308f145       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      16 minutes ago      Exited              kube-proxy                0                   adace0914115c       kube-proxy-d6zw9
	ccea80d1a22a4       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      16 minutes ago      Exited              kube-scheduler            0                   982016c43ab0e       kube-scheduler-ha-503856
	3879d2de39f1c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      16 minutes ago      Exited              etcd                      0                   eb7c9eb1ba042       etcd-ha-503856
	
	
	==> coredns [234d581f1f24704042d96c4d8b3e01c914726f39a42283ba031a94abc64f2b3b] <==
	Trace[835521008]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:41:16.415)
	Trace[835521008]: [10.001471487s] [10.001471487s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1606563320]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 11:41:09.196) (total time: 10001ms):
	Trace[1606563320]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:41:19.198)
	Trace[1606563320]: [10.001554405s] [10.001554405s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:46290->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:46290->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [8315e448000809a9b1ec9c80658a16ee2c0abdfb6a00d9cf88e506833fc2e464] <==
	[INFO] 10.244.3.2:59991 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001295677s
	[INFO] 10.244.3.2:36199 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168276s
	[INFO] 10.244.3.2:56390 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118777s
	[INFO] 10.244.3.2:60188 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110134s
	[INFO] 10.244.1.2:48283 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110043s
	[INFO] 10.244.1.2:47868 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001551069s
	[INFO] 10.244.1.2:40080 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132463s
	[INFO] 10.244.1.2:39365 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001154088s
	[INFO] 10.244.1.2:42435 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074226s
	[INFO] 10.244.0.4:41562 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076296s
	[INFO] 10.244.0.4:56190 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067218s
	[INFO] 10.244.3.2:36444 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119378s
	[INFO] 10.244.3.2:38880 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000151765s
	[INFO] 10.244.1.2:43281 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016005s
	[INFO] 10.244.1.2:44768 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098293s
	[INFO] 10.244.0.4:42211 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000129163s
	[INFO] 10.244.0.4:53178 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082891s
	[INFO] 10.244.3.2:39486 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118564s
	[INFO] 10.244.3.2:46262 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112723s
	[INFO] 10.244.3.2:50068 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106233s
	[INFO] 10.244.1.2:43781 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134028s
	[INFO] 10.244.1.2:47607 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000071487s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1857&timeout=6m44s&timeoutSeconds=404&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [bb00d7b27f13dda3e761726ad01d0c57671d29780260645aec6532f94a576f16] <==
	Trace[1639001552]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:41:16.655)
	Trace[1639001552]: [10.001488144s] [10.001488144s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40460->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40460->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e67513ebd15d07c15fbfc61b2c17800f56b9db39a3b81085018122fe96a1f9de] <==
	[INFO] 10.244.1.2:52489 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001893734s
	[INFO] 10.244.0.4:58770 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111417s
	[INFO] 10.244.0.4:32786 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159712s
	[INFO] 10.244.0.4:34773 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133937s
	[INFO] 10.244.0.4:34211 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003320974s
	[INFO] 10.244.0.4:44413 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105874s
	[INFO] 10.244.0.4:37795 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067103s
	[INFO] 10.244.3.2:48365 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108129s
	[INFO] 10.244.3.2:35563 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101277s
	[INFO] 10.244.1.2:41209 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111152s
	[INFO] 10.244.1.2:59241 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000195927s
	[INFO] 10.244.1.2:32916 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097287s
	[INFO] 10.244.0.4:53548 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104877s
	[INFO] 10.244.0.4:55650 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105726s
	[INFO] 10.244.3.2:40741 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204087s
	[INFO] 10.244.3.2:41373 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105987s
	[INFO] 10.244.1.2:57537 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000193166s
	[INFO] 10.244.1.2:40497 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080869s
	[INFO] 10.244.0.4:33281 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136165s
	[INFO] 10.244.0.4:49164 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000302537s
	[INFO] 10.244.3.2:54372 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000157216s
	[INFO] 10.244.1.2:40968 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000206142s
	[INFO] 10.244.1.2:54797 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000102712s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-503856
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-503856
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=ha-503856
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T11_30_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 11:30:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-503856
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 11:46:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 11:41:45 +0000   Mon, 19 Aug 2024 11:30:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 11:41:45 +0000   Mon, 19 Aug 2024 11:30:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 11:41:45 +0000   Mon, 19 Aug 2024 11:30:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 11:41:45 +0000   Mon, 19 Aug 2024 11:30:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-503856
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ebf7fa993760403a8b3080e5ea2bdf31
	  System UUID:                ebf7fa99-3760-403a-8b30-80e5ea2bdf31
	  Boot ID:                    f3b2611c-5dfd-45ef-8747-94b35364374b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7wpbx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-6f6b679f8f-2jdlw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-6f6b679f8f-5dbrz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-503856                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-st2mx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-503856             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-503856    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-d6zw9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-503856             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-503856                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m50s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-503856 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-503856 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-503856 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           16m                    node-controller  Node ha-503856 event: Registered Node ha-503856 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-503856 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-503856 event: Registered Node ha-503856 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-503856 event: Registered Node ha-503856 in Controller
	  Warning  ContainerGCFailed        6m33s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m54s (x3 over 6m43s)  kubelet          Node ha-503856 status is now: NodeNotReady
	  Normal   RegisteredNode           4m54s                  node-controller  Node ha-503856 event: Registered Node ha-503856 in Controller
	  Normal   RegisteredNode           4m51s                  node-controller  Node ha-503856 event: Registered Node ha-503856 in Controller
	  Normal   RegisteredNode           3m16s                  node-controller  Node ha-503856 event: Registered Node ha-503856 in Controller
	
	
	Name:               ha-503856-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-503856-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=ha-503856
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T11_30_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 11:30:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-503856-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 11:46:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 11:42:29 +0000   Mon, 19 Aug 2024 11:41:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 11:42:29 +0000   Mon, 19 Aug 2024 11:41:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 11:42:29 +0000   Mon, 19 Aug 2024 11:41:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 11:42:29 +0000   Mon, 19 Aug 2024 11:41:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    ha-503856-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9a5c9c65d0cb479397609eb1cad01b44
	  System UUID:                9a5c9c65-d0cb-4793-9760-9eb1cad01b44
	  Boot ID:                    61f3472a-e7db-4faa-8ee3-b445e7a5d07f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nxhq6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-503856-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-rnjwj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-503856-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-503856-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-j2f6h                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-503856-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-503856-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  Starting                 4m45s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-503856-m02 status is now: NodeHasSufficientMemory
	  Normal  CIDRAssignmentFailed     15m                    cidrAllocator    Node ha-503856-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-503856-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-503856-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                    node-controller  Node ha-503856-m02 event: Registered Node ha-503856-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-503856-m02 event: Registered Node ha-503856-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-503856-m02 event: Registered Node ha-503856-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-503856-m02 status is now: NodeNotReady
	  Normal  Starting                 5m18s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m17s (x8 over 5m18s)  kubelet          Node ha-503856-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m17s (x8 over 5m18s)  kubelet          Node ha-503856-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     5m17s (x7 over 5m18s)  kubelet          Node ha-503856-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m54s                  node-controller  Node ha-503856-m02 event: Registered Node ha-503856-m02 in Controller
	  Normal  RegisteredNode           4m51s                  node-controller  Node ha-503856-m02 event: Registered Node ha-503856-m02 in Controller
	  Normal  RegisteredNode           3m16s                  node-controller  Node ha-503856-m02 event: Registered Node ha-503856-m02 in Controller
	
	
	Name:               ha-503856-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-503856-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=ha-503856
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T11_33_11_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 11:33:11 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-503856-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 11:44:11 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 19 Aug 2024 11:43:50 +0000   Mon, 19 Aug 2024 11:44:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 19 Aug 2024 11:43:50 +0000   Mon, 19 Aug 2024 11:44:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 19 Aug 2024 11:43:50 +0000   Mon, 19 Aug 2024 11:44:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 19 Aug 2024 11:43:50 +0000   Mon, 19 Aug 2024 11:44:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    ha-503856-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9fb3b2ab1e7b42139f0ea868d31218ff
	  System UUID:                9fb3b2ab-1e7b-4213-9f0e-a868d31218ff
	  Boot ID:                    89162d9e-d5e5-43be-8fc5-f8d1a501012c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-f5g8l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-h29sh              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-4kpcq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   CIDRAssignmentFailed     13m                    cidrAllocator    Node ha-503856-m04 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-503856-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-503856-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-503856-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-503856-m04 event: Registered Node ha-503856-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-503856-m04 event: Registered Node ha-503856-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-503856-m04 event: Registered Node ha-503856-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-503856-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m54s                  node-controller  Node ha-503856-m04 event: Registered Node ha-503856-m04 in Controller
	  Normal   RegisteredNode           4m51s                  node-controller  Node ha-503856-m04 event: Registered Node ha-503856-m04 in Controller
	  Normal   NodeNotReady             4m14s                  node-controller  Node ha-503856-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m16s                  node-controller  Node ha-503856-m04 event: Registered Node ha-503856-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-503856-m04 has been rebooted, boot id: 89162d9e-d5e5-43be-8fc5-f8d1a501012c
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-503856-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-503856-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-503856-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                2m48s                  kubelet          Node ha-503856-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s                   node-controller  Node ha-503856-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.042926] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.060482] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062102] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.195986] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.137965] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.282518] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.003020] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.667712] systemd-fstab-generator[899]: Ignoring "noauto" option for root device
	[  +0.056031] kauditd_printk_skb: 158 callbacks suppressed
	[Aug19 11:30] systemd-fstab-generator[1324]: Ignoring "noauto" option for root device
	[  +0.088050] kauditd_printk_skb: 79 callbacks suppressed
	[  +7.046565] kauditd_printk_skb: 60 callbacks suppressed
	[Aug19 11:31] kauditd_printk_skb: 24 callbacks suppressed
	[Aug19 11:40] systemd-fstab-generator[3466]: Ignoring "noauto" option for root device
	[  +0.145428] systemd-fstab-generator[3478]: Ignoring "noauto" option for root device
	[  +0.172464] systemd-fstab-generator[3492]: Ignoring "noauto" option for root device
	[  +0.149641] systemd-fstab-generator[3504]: Ignoring "noauto" option for root device
	[  +0.301323] systemd-fstab-generator[3532]: Ignoring "noauto" option for root device
	[  +4.821377] systemd-fstab-generator[3632]: Ignoring "noauto" option for root device
	[  +0.087417] kauditd_printk_skb: 100 callbacks suppressed
	[Aug19 11:41] kauditd_printk_skb: 32 callbacks suppressed
	[ +11.969806] kauditd_printk_skb: 65 callbacks suppressed
	[ +10.054793] kauditd_printk_skb: 1 callbacks suppressed
	[ +16.863278] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [3879d2de39f1c9189c2e3eb16e843ddcbe448dd4be3f2b5ba5617abddfeb007a] <==
	{"level":"info","ts":"2024-08-19T11:39:19.783922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b93c4bc4617b0fe is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T11:39:19.784093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b93c4bc4617b0fe became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T11:39:19.784131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b93c4bc4617b0fe received MsgPreVoteResp from 6b93c4bc4617b0fe at term 2"}
	{"level":"info","ts":"2024-08-19T11:39:19.784167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b93c4bc4617b0fe [logterm: 2, index: 2188] sent MsgPreVote request to a2b83f2dcb1ed0d at term 2"}
	{"level":"info","ts":"2024-08-19T11:39:19.784193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b93c4bc4617b0fe [logterm: 2, index: 2188] sent MsgPreVote request to 4ad1f16cda0ec14b at term 2"}
	{"level":"info","ts":"2024-08-19T11:39:19.834466Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6b93c4bc4617b0fe","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-19T11:39:19.834727Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"info","ts":"2024-08-19T11:39:19.834796Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"info","ts":"2024-08-19T11:39:19.834836Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"info","ts":"2024-08-19T11:39:19.835054Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"info","ts":"2024-08-19T11:39:19.835173Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"info","ts":"2024-08-19T11:39:19.835279Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"info","ts":"2024-08-19T11:39:19.835336Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"info","ts":"2024-08-19T11:39:19.835363Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a2b83f2dcb1ed0d"}
	{"level":"info","ts":"2024-08-19T11:39:19.835436Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a2b83f2dcb1ed0d"}
	{"level":"info","ts":"2024-08-19T11:39:19.835502Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a2b83f2dcb1ed0d"}
	{"level":"info","ts":"2024-08-19T11:39:19.835587Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d"}
	{"level":"info","ts":"2024-08-19T11:39:19.835691Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d"}
	{"level":"info","ts":"2024-08-19T11:39:19.835804Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"a2b83f2dcb1ed0d"}
	{"level":"info","ts":"2024-08-19T11:39:19.835878Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a2b83f2dcb1ed0d"}
	{"level":"info","ts":"2024-08-19T11:39:19.839726Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.102:2380"}
	{"level":"warn","ts":"2024-08-19T11:39:19.839763Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.002045175s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-19T11:39:19.839878Z","caller":"traceutil/trace.go:171","msg":"trace[1982300054] range","detail":"{range_begin:; range_end:; }","duration":"9.002180063s","start":"2024-08-19T11:39:10.837688Z","end":"2024-08-19T11:39:19.839868Z","steps":["trace[1982300054] 'agreement among raft nodes before linearized reading'  (duration: 9.002043282s)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T11:39:19.839928Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.102:2380"}
	{"level":"info","ts":"2024-08-19T11:39:19.839979Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-503856","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.102:2380"],"advertise-client-urls":["https://192.168.39.102:2379"]}
	
	
	==> etcd [9e253084db077adff3d7dc910bd0fc7c7d6eb0d8b1b91bcd1ebce47ff183cf7c] <==
	{"level":"info","ts":"2024-08-19T11:43:16.301643Z","caller":"traceutil/trace.go:171","msg":"trace[535509330] transaction","detail":"{read_only:false; response_revision:2425; number_of_response:1; }","duration":"121.698974ms","start":"2024-08-19T11:43:16.179925Z","end":"2024-08-19T11:43:16.301624Z","steps":["trace[535509330] 'process raft request'  (duration: 121.54084ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T11:43:54.140116Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"4ad1f16cda0ec14b","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"51.684791ms"}
	{"level":"warn","ts":"2024-08-19T11:43:54.140245Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"a2b83f2dcb1ed0d","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"51.818635ms"}
	{"level":"info","ts":"2024-08-19T11:43:54.142717Z","caller":"traceutil/trace.go:171","msg":"trace[987334433] linearizableReadLoop","detail":"{readStateIndex:3009; appliedIndex:3010; }","duration":"140.737171ms","start":"2024-08-19T11:43:54.001897Z","end":"2024-08-19T11:43:54.142634Z","steps":["trace[987334433] 'read index received'  (duration: 140.730843ms)","trace[987334433] 'applied index is now lower than readState.Index'  (duration: 5.313µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T11:43:54.144022Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.12035ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-503856-m03\" ","response":"range_response_count:1 size:5879"}
	{"level":"info","ts":"2024-08-19T11:43:54.144164Z","caller":"traceutil/trace.go:171","msg":"trace[999833495] range","detail":"{range_begin:/registry/minions/ha-503856-m03; range_end:; response_count:1; response_revision:2576; }","duration":"142.257507ms","start":"2024-08-19T11:43:54.001892Z","end":"2024-08-19T11:43:54.144150Z","steps":["trace[999833495] 'agreement among raft nodes before linearized reading'  (duration: 140.965938ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T11:43:54.144422Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.277833ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T11:43:54.145149Z","caller":"traceutil/trace.go:171","msg":"trace[1643953805] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2576; }","duration":"140.503742ms","start":"2024-08-19T11:43:54.004634Z","end":"2024-08-19T11:43:54.145137Z","steps":["trace[1643953805] 'agreement among raft nodes before linearized reading'  (duration: 138.826694ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T11:44:04.605799Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.122:57302","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-08-19T11:44:04.636137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b93c4bc4617b0fe switched to configuration voters=(732824443485809933 7751755696543609086)"}
	{"level":"info","ts":"2024-08-19T11:44:04.638223Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"1cdd3ec65c5f94ba","local-member-id":"6b93c4bc4617b0fe","removed-remote-peer-id":"4ad1f16cda0ec14b","removed-remote-peer-urls":["https://192.168.39.122:2380"]}
	{"level":"info","ts":"2024-08-19T11:44:04.638333Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"warn","ts":"2024-08-19T11:44:04.638759Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"info","ts":"2024-08-19T11:44:04.638841Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"warn","ts":"2024-08-19T11:44:04.639617Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"info","ts":"2024-08-19T11:44:04.639699Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"info","ts":"2024-08-19T11:44:04.639949Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"warn","ts":"2024-08-19T11:44:04.640240Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"4ad1f16cda0ec14b","error":"context canceled"}
	{"level":"warn","ts":"2024-08-19T11:44:04.640309Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"4ad1f16cda0ec14b","error":"failed to read 4ad1f16cda0ec14b on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-19T11:44:04.640362Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"warn","ts":"2024-08-19T11:44:04.640543Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"4ad1f16cda0ec14b","error":"http: read on closed response body"}
	{"level":"info","ts":"2024-08-19T11:44:04.640596Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"info","ts":"2024-08-19T11:44:04.640650Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"info","ts":"2024-08-19T11:44:04.640686Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"6b93c4bc4617b0fe","removed-remote-peer-id":"4ad1f16cda0ec14b"}
	{"level":"warn","ts":"2024-08-19T11:44:04.656536Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"6b93c4bc4617b0fe","remote-peer-id-stream-handler":"6b93c4bc4617b0fe","remote-peer-id-from":"4ad1f16cda0ec14b"}
	
	
	==> kernel <==
	 11:46:38 up 17 min,  0 users,  load average: 0.60, 0.62, 0.39
	Linux ha-503856 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1964134e9de80be5a82ee203eebe9f5718ba2a8b84e55479e47e81a74a259a50] <==
	I0819 11:38:43.533431       1 main.go:322] Node ha-503856-m04 has CIDR [10.244.4.0/24] 
	I0819 11:38:53.530883       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0819 11:38:53.530926       1 main.go:299] handling current node
	I0819 11:38:53.530945       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0819 11:38:53.530953       1 main.go:322] Node ha-503856-m02 has CIDR [10.244.1.0/24] 
	I0819 11:38:53.531160       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0819 11:38:53.531188       1 main.go:322] Node ha-503856-m03 has CIDR [10.244.3.0/24] 
	I0819 11:38:53.531272       1 main.go:295] Handling node with IPs: map[192.168.39.161:{}]
	I0819 11:38:53.531296       1 main.go:322] Node ha-503856-m04 has CIDR [10.244.4.0/24] 
	I0819 11:39:03.530516       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0819 11:39:03.530656       1 main.go:299] handling current node
	I0819 11:39:03.530688       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0819 11:39:03.530744       1 main.go:322] Node ha-503856-m02 has CIDR [10.244.1.0/24] 
	I0819 11:39:03.530936       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0819 11:39:03.530979       1 main.go:322] Node ha-503856-m03 has CIDR [10.244.3.0/24] 
	I0819 11:39:03.531136       1 main.go:295] Handling node with IPs: map[192.168.39.161:{}]
	I0819 11:39:03.531180       1 main.go:322] Node ha-503856-m04 has CIDR [10.244.4.0/24] 
	I0819 11:39:13.531184       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0819 11:39:13.531286       1 main.go:322] Node ha-503856-m03 has CIDR [10.244.3.0/24] 
	I0819 11:39:13.531424       1 main.go:295] Handling node with IPs: map[192.168.39.161:{}]
	I0819 11:39:13.531446       1 main.go:322] Node ha-503856-m04 has CIDR [10.244.4.0/24] 
	I0819 11:39:13.531528       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0819 11:39:13.531547       1 main.go:299] handling current node
	I0819 11:39:13.531568       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0819 11:39:13.531583       1 main.go:322] Node ha-503856-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [85182c790c374aa6e461a0c8e3ed7dbd713c45e212f3abf2baa82b7db579dcbe] <==
	I0819 11:45:54.756479       1 main.go:322] Node ha-503856-m04 has CIDR [10.244.4.0/24] 
	I0819 11:46:04.749006       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0819 11:46:04.749112       1 main.go:299] handling current node
	I0819 11:46:04.749128       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0819 11:46:04.749138       1 main.go:322] Node ha-503856-m02 has CIDR [10.244.1.0/24] 
	I0819 11:46:04.749261       1 main.go:295] Handling node with IPs: map[192.168.39.161:{}]
	I0819 11:46:04.749282       1 main.go:322] Node ha-503856-m04 has CIDR [10.244.4.0/24] 
	I0819 11:46:14.749193       1 main.go:295] Handling node with IPs: map[192.168.39.161:{}]
	I0819 11:46:14.749249       1 main.go:322] Node ha-503856-m04 has CIDR [10.244.4.0/24] 
	I0819 11:46:14.749392       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0819 11:46:14.749418       1 main.go:299] handling current node
	I0819 11:46:14.749432       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0819 11:46:14.749437       1 main.go:322] Node ha-503856-m02 has CIDR [10.244.1.0/24] 
	I0819 11:46:24.756484       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0819 11:46:24.756592       1 main.go:299] handling current node
	I0819 11:46:24.756620       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0819 11:46:24.756637       1 main.go:322] Node ha-503856-m02 has CIDR [10.244.1.0/24] 
	I0819 11:46:24.756779       1 main.go:295] Handling node with IPs: map[192.168.39.161:{}]
	I0819 11:46:24.756803       1 main.go:322] Node ha-503856-m04 has CIDR [10.244.4.0/24] 
	I0819 11:46:34.758615       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0819 11:46:34.758659       1 main.go:299] handling current node
	I0819 11:46:34.758700       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0819 11:46:34.758707       1 main.go:322] Node ha-503856-m02 has CIDR [10.244.1.0/24] 
	I0819 11:46:34.758839       1 main.go:295] Handling node with IPs: map[192.168.39.161:{}]
	I0819 11:46:34.758862       1 main.go:322] Node ha-503856-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [4c7e0e730267c99dc16ff00240be2f1ea3289554181af57aeb7d2da5ff7c80a4] <==
	I0819 11:41:42.847734       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0819 11:41:42.847877       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0819 11:41:42.904325       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 11:41:42.917098       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 11:41:42.917138       1 policy_source.go:224] refreshing policies
	I0819 11:41:42.923333       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 11:41:42.923439       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 11:41:42.923557       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 11:41:42.923592       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 11:41:42.923631       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 11:41:42.923661       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 11:41:42.926656       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 11:41:42.928953       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0819 11:41:42.940394       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.122]
	I0819 11:41:42.943031       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 11:41:42.948220       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 11:41:42.948410       1 aggregator.go:171] initial CRD sync complete...
	I0819 11:41:42.948473       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 11:41:42.948497       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 11:41:42.948580       1 cache.go:39] Caches are synced for autoregister controller
	I0819 11:41:42.953187       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0819 11:41:42.961676       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0819 11:41:42.991210       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 11:41:43.829426       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0819 11:41:44.275662       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.122 192.168.39.183]
	
	
	==> kube-apiserver [db6351db80fdc3b8572db21488088e0c429be0a4387ee0793081531df343c543] <==
	I0819 11:41:03.923339       1 options.go:228] external host was not specified, using 192.168.39.102
	I0819 11:41:03.931607       1 server.go:142] Version: v1.31.0
	I0819 11:41:03.931667       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 11:41:04.500649       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0819 11:41:04.504133       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 11:41:04.505619       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0819 11:41:04.505652       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0819 11:41:04.505838       1 instance.go:232] Using reconciler: lease
	W0819 11:41:24.500622       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0819 11:41:24.500688       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0819 11:41:24.507493       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0819 11:41:24.507573       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [21e59f85336459296bb85fc223b61e6ee435d0dfdac2c736a94f9c4d32df3033] <==
	I0819 11:44:01.601699       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.192µs"
	I0819 11:44:03.492362       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="79.136µs"
	I0819 11:44:03.594893       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.369959ms"
	I0819 11:44:03.595005       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="52.193µs"
	I0819 11:44:03.934623       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.242µs"
	I0819 11:44:03.941691       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="76.625µs"
	I0819 11:44:15.579320       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-503856-m04"
	I0819 11:44:15.579767       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m03"
	E0819 11:44:15.625639       1 garbagecollector.go:399] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"storage.k8s.io/v1\", Kind:\"CSINode\", Name:\"ha-503856-m03\", UID:\"2e45244d-a638-437b-b563-b0e1153e99ee\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-503856-m03\", UID:\"0156637e-06f9-472f-8c19-64c789e14a1a\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io \"ha-503856-m03\" not found" logger="UnhandledError"
	E0819 11:44:27.315365       1 gc_controller.go:151] "Failed to get node" err="node \"ha-503856-m03\" not found" logger="pod-garbage-collector-controller" node="ha-503856-m03"
	E0819 11:44:27.315493       1 gc_controller.go:151] "Failed to get node" err="node \"ha-503856-m03\" not found" logger="pod-garbage-collector-controller" node="ha-503856-m03"
	E0819 11:44:27.315522       1 gc_controller.go:151] "Failed to get node" err="node \"ha-503856-m03\" not found" logger="pod-garbage-collector-controller" node="ha-503856-m03"
	E0819 11:44:27.315558       1 gc_controller.go:151] "Failed to get node" err="node \"ha-503856-m03\" not found" logger="pod-garbage-collector-controller" node="ha-503856-m03"
	E0819 11:44:27.315581       1 gc_controller.go:151] "Failed to get node" err="node \"ha-503856-m03\" not found" logger="pod-garbage-collector-controller" node="ha-503856-m03"
	E0819 11:44:47.316157       1 gc_controller.go:151] "Failed to get node" err="node \"ha-503856-m03\" not found" logger="pod-garbage-collector-controller" node="ha-503856-m03"
	E0819 11:44:47.316194       1 gc_controller.go:151] "Failed to get node" err="node \"ha-503856-m03\" not found" logger="pod-garbage-collector-controller" node="ha-503856-m03"
	E0819 11:44:47.316202       1 gc_controller.go:151] "Failed to get node" err="node \"ha-503856-m03\" not found" logger="pod-garbage-collector-controller" node="ha-503856-m03"
	E0819 11:44:47.316207       1 gc_controller.go:151] "Failed to get node" err="node \"ha-503856-m03\" not found" logger="pod-garbage-collector-controller" node="ha-503856-m03"
	E0819 11:44:47.316212       1 gc_controller.go:151] "Failed to get node" err="node \"ha-503856-m03\" not found" logger="pod-garbage-collector-controller" node="ha-503856-m03"
	I0819 11:44:52.347384       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:44:52.368885       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:44:52.414433       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="50.386452ms"
	I0819 11:44:52.414588       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.347µs"
	I0819 11:44:55.254514       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	I0819 11:44:57.533027       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-503856-m04"
	
	
	==> kube-controller-manager [bb5dc24c345e023d2c1b39aeaa399932e0075a1d509e7efe0fc630b13b34930d] <==
	I0819 11:41:04.713046       1 serving.go:386] Generated self-signed cert in-memory
	I0819 11:41:04.976266       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0819 11:41:04.976307       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 11:41:04.978945       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0819 11:41:04.979618       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 11:41:04.979752       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 11:41:04.979826       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0819 11:41:25.514043       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.102:8443/healthz\": dial tcp 192.168.39.102:8443: connect: connection refused"
	
	
	==> kube-proxy [4410418eb581e358ab209bde84564e655ccd1d460399386fe5db165fc84bfc80] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 11:41:07.775872       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-503856\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 11:41:10.848120       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-503856\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 11:41:13.919575       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-503856\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 11:41:20.063786       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-503856\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 11:41:29.280098       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-503856\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0819 11:41:47.629754       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.102"]
	E0819 11:41:47.629925       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 11:41:47.669393       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 11:41:47.669440       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 11:41:47.669470       1 server_linux.go:169] "Using iptables Proxier"
	I0819 11:41:47.671873       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 11:41:47.672151       1 server.go:483] "Version info" version="v1.31.0"
	I0819 11:41:47.672175       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 11:41:47.673705       1 config.go:197] "Starting service config controller"
	I0819 11:41:47.673743       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 11:41:47.673764       1 config.go:104] "Starting endpoint slice config controller"
	I0819 11:41:47.673768       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 11:41:47.674225       1 config.go:326] "Starting node config controller"
	I0819 11:41:47.674248       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 11:41:47.774539       1 shared_informer.go:320] Caches are synced for node config
	I0819 11:41:47.774556       1 shared_informer.go:320] Caches are synced for service config
	I0819 11:41:47.774569       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [68730d308f1456cb2e025409c4d2edab9b17464c74f9beabfafc13246e8cc2b3] <==
	E0819 11:38:15.745903       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 11:38:15.746110       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 11:38:15.746234       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 11:38:18.815500       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 11:38:18.815957       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 11:38:18.816701       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-503856&resourceVersion=1833": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 11:38:18.817349       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-503856&resourceVersion=1833\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 11:38:21.888843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 11:38:21.888953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 11:38:24.961206       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	W0819 11:38:24.961249       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-503856&resourceVersion=1833": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 11:38:24.961514       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-503856&resourceVersion=1833\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0819 11:38:24.961386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 11:38:34.176547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 11:38:34.176654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 11:38:34.176693       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 11:38:34.176752       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 11:38:37.248697       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-503856&resourceVersion=1833": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 11:38:37.249354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-503856&resourceVersion=1833\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 11:38:49.536621       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 11:38:49.536971       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1829\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 11:38:55.680723       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-503856&resourceVersion=1833": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 11:38:55.680810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-503856&resourceVersion=1833\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 11:38:55.680756       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 11:38:55.680968       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [b4d4ca83f34581802d5c17fb457aa924254ed7a3a6c474aecf081a05b7b0d384] <==
	W0819 11:41:33.696255       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.102:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0819 11:41:33.696318       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.102:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.102:8443: connect: connection refused" logger="UnhandledError"
	W0819 11:41:34.705908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.102:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0819 11:41:34.706026       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.102:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.102:8443: connect: connection refused" logger="UnhandledError"
	W0819 11:41:34.833939       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.102:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0819 11:41:34.834014       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.102:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.102:8443: connect: connection refused" logger="UnhandledError"
	W0819 11:41:34.834559       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.102:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0819 11:41:34.834623       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.102:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.102:8443: connect: connection refused" logger="UnhandledError"
	W0819 11:41:34.896697       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.102:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0819 11:41:34.896790       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.102:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.102:8443: connect: connection refused" logger="UnhandledError"
	W0819 11:41:35.459416       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.102:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0819 11:41:35.459494       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.102:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.102:8443: connect: connection refused" logger="UnhandledError"
	W0819 11:41:40.322581       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.102:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0819 11:41:40.322768       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.102:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.102:8443: connect: connection refused" logger="UnhandledError"
	W0819 11:41:42.853678       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 11:41:42.853839       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:41:42.854030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 11:41:42.854176       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:41:42.856550       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 11:41:42.856642       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0819 11:42:00.528143       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 11:44:01.390468       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-f5g8l\": pod busybox-7dff88458-f5g8l is already assigned to node \"ha-503856-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-f5g8l" node="ha-503856-m04"
	E0819 11:44:01.390540       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e9b2c597-c9e4-4666-b0b6-5a46cf2370f5(default/busybox-7dff88458-f5g8l) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-f5g8l"
	E0819 11:44:01.390563       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-f5g8l\": pod busybox-7dff88458-f5g8l is already assigned to node \"ha-503856-m04\"" pod="default/busybox-7dff88458-f5g8l"
	I0819 11:44:01.390584       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-f5g8l" node="ha-503856-m04"
	
	
	==> kube-scheduler [ccea80d1a22a4536a82c166633220aa67d32633919751a2cf1da7f06c91bc674] <==
	W0819 11:30:01.659419       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 11:30:01.660325       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 11:30:03.075764       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 11:33:11.218857       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-h29sh\": pod kindnet-h29sh is already assigned to node \"ha-503856-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-h29sh" node="ha-503856-m04"
	E0819 11:33:11.219015       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-h29sh\": pod kindnet-h29sh is already assigned to node \"ha-503856-m04\"" pod="kube-system/kindnet-h29sh"
	E0819 11:33:11.221900       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4kpcq\": pod kube-proxy-4kpcq is already assigned to node \"ha-503856-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4kpcq" node="ha-503856-m04"
	E0819 11:33:11.221962       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9f038ca5-2e98-4126-9959-f24f6ab3a802(kube-system/kube-proxy-4kpcq) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-4kpcq"
	E0819 11:33:11.221977       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4kpcq\": pod kube-proxy-4kpcq is already assigned to node \"ha-503856-m04\"" pod="kube-system/kube-proxy-4kpcq"
	I0819 11:33:11.222009       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4kpcq" node="ha-503856-m04"
	E0819 11:33:11.260369       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5zzk5\": pod kube-proxy-5zzk5 is already assigned to node \"ha-503856-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5zzk5" node="ha-503856-m04"
	E0819 11:33:11.260439       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 29216c29-6ceb-411d-a714-c94d674aed3f(kube-system/kube-proxy-5zzk5) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5zzk5"
	E0819 11:33:11.260454       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5zzk5\": pod kube-proxy-5zzk5 is already assigned to node \"ha-503856-m04\"" pod="kube-system/kube-proxy-5zzk5"
	I0819 11:33:11.260471       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5zzk5" node="ha-503856-m04"
	E0819 11:39:11.384170       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0819 11:39:11.514012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0819 11:39:15.748727       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0819 11:39:17.406249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0819 11:39:17.824460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0819 11:39:17.971967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0819 11:39:18.330384       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0819 11:39:18.761341       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0819 11:39:18.769619       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0819 11:39:18.802761       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0819 11:39:19.342163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0819 11:39:19.666559       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 19 11:45:06 ha-503856 kubelet[1331]: E0819 11:45:06.137362    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067906136997405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:45:06 ha-503856 kubelet[1331]: E0819 11:45:06.137389    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067906136997405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:45:16 ha-503856 kubelet[1331]: E0819 11:45:16.139549    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067916139226424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:45:16 ha-503856 kubelet[1331]: E0819 11:45:16.139655    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067916139226424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:45:26 ha-503856 kubelet[1331]: E0819 11:45:26.141183    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067926140701949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:45:26 ha-503856 kubelet[1331]: E0819 11:45:26.141249    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067926140701949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:45:36 ha-503856 kubelet[1331]: E0819 11:45:36.142698    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067936142355845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:45:36 ha-503856 kubelet[1331]: E0819 11:45:36.142978    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067936142355845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:45:46 ha-503856 kubelet[1331]: E0819 11:45:46.145872    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067946145406204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:45:46 ha-503856 kubelet[1331]: E0819 11:45:46.145924    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067946145406204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:45:56 ha-503856 kubelet[1331]: E0819 11:45:56.148348    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067956147919689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:45:56 ha-503856 kubelet[1331]: E0819 11:45:56.148777    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067956147919689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:46:05 ha-503856 kubelet[1331]: E0819 11:46:05.894639    1331 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 11:46:05 ha-503856 kubelet[1331]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 11:46:05 ha-503856 kubelet[1331]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 11:46:05 ha-503856 kubelet[1331]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 11:46:05 ha-503856 kubelet[1331]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 11:46:06 ha-503856 kubelet[1331]: E0819 11:46:06.150968    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067966150541204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:46:06 ha-503856 kubelet[1331]: E0819 11:46:06.150993    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067966150541204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:46:16 ha-503856 kubelet[1331]: E0819 11:46:16.152906    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067976152616315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:46:16 ha-503856 kubelet[1331]: E0819 11:46:16.153321    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067976152616315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:46:26 ha-503856 kubelet[1331]: E0819 11:46:26.155649    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067986155308652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:46:26 ha-503856 kubelet[1331]: E0819 11:46:26.155691    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067986155308652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:46:36 ha-503856 kubelet[1331]: E0819 11:46:36.157441    1331 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067996156984291,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 11:46:36 ha-503856 kubelet[1331]: E0819 11:46:36.157711    1331 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724067996156984291,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 11:46:37.338442  130069 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19476-99410/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-503856 -n ha-503856
helpers_test.go:261: (dbg) Run:  kubectl --context ha-503856 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.58s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (330.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-320821
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-320821
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-320821: exit status 82 (2m1.779783241s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-320821-m03"  ...
	* Stopping node "multinode-320821-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-320821" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-320821 --wait=true -v=8 --alsologtostderr
E0819 12:03:35.349122  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:06:38.415975  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-320821 --wait=true -v=8 --alsologtostderr: (3m26.541418995s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-320821
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-320821 -n multinode-320821
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-320821 logs -n 25: (1.394780567s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-320821 ssh -n                                                                 | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-320821 cp multinode-320821-m02:/home/docker/cp-test.txt                       | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile690601289/001/cp-test_multinode-320821-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-320821 ssh -n                                                                 | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-320821 cp multinode-320821-m02:/home/docker/cp-test.txt                       | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821:/home/docker/cp-test_multinode-320821-m02_multinode-320821.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-320821 ssh -n                                                                 | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-320821 ssh -n multinode-320821 sudo cat                                       | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | /home/docker/cp-test_multinode-320821-m02_multinode-320821.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-320821 cp multinode-320821-m02:/home/docker/cp-test.txt                       | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821-m03:/home/docker/cp-test_multinode-320821-m02_multinode-320821-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-320821 ssh -n                                                                 | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-320821 ssh -n multinode-320821-m03 sudo cat                                   | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | /home/docker/cp-test_multinode-320821-m02_multinode-320821-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-320821 cp testdata/cp-test.txt                                                | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-320821 ssh -n                                                                 | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-320821 cp multinode-320821-m03:/home/docker/cp-test.txt                       | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile690601289/001/cp-test_multinode-320821-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-320821 ssh -n                                                                 | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-320821 cp multinode-320821-m03:/home/docker/cp-test.txt                       | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821:/home/docker/cp-test_multinode-320821-m03_multinode-320821.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-320821 ssh -n                                                                 | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-320821 ssh -n multinode-320821 sudo cat                                       | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | /home/docker/cp-test_multinode-320821-m03_multinode-320821.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-320821 cp multinode-320821-m03:/home/docker/cp-test.txt                       | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821-m02:/home/docker/cp-test_multinode-320821-m03_multinode-320821-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-320821 ssh -n                                                                 | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-320821 ssh -n multinode-320821-m02 sudo cat                                   | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | /home/docker/cp-test_multinode-320821-m03_multinode-320821-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-320821 node stop m03                                                          | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	| node    | multinode-320821 node start                                                             | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:01 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-320821                                                                | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC |                     |
	| stop    | -p multinode-320821                                                                     | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC |                     |
	| start   | -p multinode-320821                                                                     | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:03 UTC | 19 Aug 24 12:06 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-320821                                                                | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:06 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 12:03:27
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 12:03:27.535612  139391 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:03:27.535759  139391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:03:27.535769  139391 out.go:358] Setting ErrFile to fd 2...
	I0819 12:03:27.535773  139391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:03:27.535997  139391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 12:03:27.536555  139391 out.go:352] Setting JSON to false
	I0819 12:03:27.537521  139391 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6354,"bootTime":1724062654,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 12:03:27.537584  139391 start.go:139] virtualization: kvm guest
	I0819 12:03:27.539689  139391 out.go:177] * [multinode-320821] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 12:03:27.541284  139391 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 12:03:27.541291  139391 notify.go:220] Checking for updates...
	I0819 12:03:27.542668  139391 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:03:27.544095  139391 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 12:03:27.545709  139391 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 12:03:27.547359  139391 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 12:03:27.548555  139391 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 12:03:27.550154  139391 config.go:182] Loaded profile config "multinode-320821": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:03:27.550250  139391 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 12:03:27.550677  139391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:03:27.550723  139391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:03:27.566094  139391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34139
	I0819 12:03:27.566602  139391 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:03:27.567349  139391 main.go:141] libmachine: Using API Version  1
	I0819 12:03:27.567384  139391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:03:27.567759  139391 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:03:27.567943  139391 main.go:141] libmachine: (multinode-320821) Calling .DriverName
	I0819 12:03:27.606818  139391 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 12:03:27.608163  139391 start.go:297] selected driver: kvm2
	I0819 12:03:27.608191  139391 start.go:901] validating driver "kvm2" against &{Name:multinode-320821 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-320821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.19 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:03:27.608337  139391 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 12:03:27.608747  139391 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:03:27.608831  139391 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19476-99410/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 12:03:27.625037  139391 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 12:03:27.625809  139391 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 12:03:27.625854  139391 cni.go:84] Creating CNI manager for ""
	I0819 12:03:27.625862  139391 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0819 12:03:27.625910  139391 start.go:340] cluster config:
	{Name:multinode-320821 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-320821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.19 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:03:27.626018  139391 iso.go:125] acquiring lock: {Name:mkf3e460532d8bd3cc1352bbf698537d79a481a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:03:27.627849  139391 out.go:177] * Starting "multinode-320821" primary control-plane node in "multinode-320821" cluster
	I0819 12:03:27.629126  139391 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:03:27.629166  139391 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 12:03:27.629175  139391 cache.go:56] Caching tarball of preloaded images
	I0819 12:03:27.629254  139391 preload.go:172] Found /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 12:03:27.629266  139391 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 12:03:27.629374  139391 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821/config.json ...
	I0819 12:03:27.629583  139391 start.go:360] acquireMachinesLock for multinode-320821: {Name:mk79269fbbd6912302c8df8ddb039d4e2f1a0790 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 12:03:27.629625  139391 start.go:364] duration metric: took 23.08µs to acquireMachinesLock for "multinode-320821"
	I0819 12:03:27.629640  139391 start.go:96] Skipping create...Using existing machine configuration
	I0819 12:03:27.629648  139391 fix.go:54] fixHost starting: 
	I0819 12:03:27.629917  139391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:03:27.629950  139391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:03:27.644835  139391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43833
	I0819 12:03:27.645245  139391 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:03:27.645784  139391 main.go:141] libmachine: Using API Version  1
	I0819 12:03:27.645807  139391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:03:27.646118  139391 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:03:27.646330  139391 main.go:141] libmachine: (multinode-320821) Calling .DriverName
	I0819 12:03:27.646473  139391 main.go:141] libmachine: (multinode-320821) Calling .GetState
	I0819 12:03:27.648176  139391 fix.go:112] recreateIfNeeded on multinode-320821: state=Running err=<nil>
	W0819 12:03:27.648196  139391 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 12:03:27.649856  139391 out.go:177] * Updating the running kvm2 "multinode-320821" VM ...
	I0819 12:03:27.651100  139391 machine.go:93] provisionDockerMachine start ...
	I0819 12:03:27.651126  139391 main.go:141] libmachine: (multinode-320821) Calling .DriverName
	I0819 12:03:27.651377  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHHostname
	I0819 12:03:27.653913  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:27.654479  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:03:27.654505  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:27.654675  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHPort
	I0819 12:03:27.654892  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:03:27.655052  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:03:27.655197  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHUsername
	I0819 12:03:27.655389  139391 main.go:141] libmachine: Using SSH client type: native
	I0819 12:03:27.655577  139391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0819 12:03:27.655590  139391 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 12:03:27.763957  139391 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-320821
	
	I0819 12:03:27.763983  139391 main.go:141] libmachine: (multinode-320821) Calling .GetMachineName
	I0819 12:03:27.764258  139391 buildroot.go:166] provisioning hostname "multinode-320821"
	I0819 12:03:27.764284  139391 main.go:141] libmachine: (multinode-320821) Calling .GetMachineName
	I0819 12:03:27.764518  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHHostname
	I0819 12:03:27.767215  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:27.767573  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:03:27.767601  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:27.767715  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHPort
	I0819 12:03:27.767980  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:03:27.768155  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:03:27.768282  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHUsername
	I0819 12:03:27.768452  139391 main.go:141] libmachine: Using SSH client type: native
	I0819 12:03:27.768650  139391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0819 12:03:27.768666  139391 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-320821 && echo "multinode-320821" | sudo tee /etc/hostname
	I0819 12:03:27.889999  139391 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-320821
	
	I0819 12:03:27.890029  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHHostname
	I0819 12:03:27.892692  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:27.893003  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:03:27.893036  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:27.893194  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHPort
	I0819 12:03:27.893389  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:03:27.893562  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:03:27.893692  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHUsername
	I0819 12:03:27.893888  139391 main.go:141] libmachine: Using SSH client type: native
	I0819 12:03:27.894057  139391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0819 12:03:27.894073  139391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-320821' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-320821/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-320821' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 12:03:28.000583  139391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:03:28.000615  139391 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19476-99410/.minikube CaCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19476-99410/.minikube}
	I0819 12:03:28.000660  139391 buildroot.go:174] setting up certificates
	I0819 12:03:28.000670  139391 provision.go:84] configureAuth start
	I0819 12:03:28.000680  139391 main.go:141] libmachine: (multinode-320821) Calling .GetMachineName
	I0819 12:03:28.001062  139391 main.go:141] libmachine: (multinode-320821) Calling .GetIP
	I0819 12:03:28.003574  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:28.003986  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:03:28.004015  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:28.004161  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHHostname
	I0819 12:03:28.006399  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:28.006731  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:03:28.006777  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:28.006904  139391 provision.go:143] copyHostCerts
	I0819 12:03:28.006942  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 12:03:28.006981  139391 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem, removing ...
	I0819 12:03:28.007012  139391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 12:03:28.007096  139391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem (1082 bytes)
	I0819 12:03:28.007190  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 12:03:28.007213  139391 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem, removing ...
	I0819 12:03:28.007222  139391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 12:03:28.007258  139391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem (1123 bytes)
	I0819 12:03:28.007321  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 12:03:28.007344  139391 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem, removing ...
	I0819 12:03:28.007352  139391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 12:03:28.007386  139391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem (1679 bytes)
	I0819 12:03:28.007451  139391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem org=jenkins.multinode-320821 san=[127.0.0.1 192.168.39.88 localhost minikube multinode-320821]
	I0819 12:03:28.577557  139391 provision.go:177] copyRemoteCerts
	I0819 12:03:28.577618  139391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 12:03:28.577643  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHHostname
	I0819 12:03:28.580552  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:28.580908  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:03:28.580946  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:28.581095  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHPort
	I0819 12:03:28.581333  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:03:28.581509  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHUsername
	I0819 12:03:28.581668  139391 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/multinode-320821/id_rsa Username:docker}
	I0819 12:03:28.667128  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 12:03:28.667212  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 12:03:28.695545  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 12:03:28.695631  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0819 12:03:28.721907  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 12:03:28.721985  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 12:03:28.747207  139391 provision.go:87] duration metric: took 746.522403ms to configureAuth
	I0819 12:03:28.747237  139391 buildroot.go:189] setting minikube options for container-runtime
	I0819 12:03:28.747501  139391 config.go:182] Loaded profile config "multinode-320821": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:03:28.747603  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHHostname
	I0819 12:03:28.750302  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:28.750693  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:03:28.750729  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:28.750948  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHPort
	I0819 12:03:28.751165  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:03:28.751335  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:03:28.751499  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHUsername
	I0819 12:03:28.751682  139391 main.go:141] libmachine: Using SSH client type: native
	I0819 12:03:28.751880  139391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0819 12:03:28.751895  139391 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 12:04:59.515518  139391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 12:04:59.515580  139391 machine.go:96] duration metric: took 1m31.864455758s to provisionDockerMachine
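	The drop-in written just above adds the service CIDR as an insecure registry for CRI-O and then restarts the runtime. A hedged way to confirm it landed (inside the VM; whether the crio unit actually sources /etc/sysconfig/crio.minikube depends on its EnvironmentFile and is assumed here):
	
		cat /etc/sysconfig/crio.minikube    # expect CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
		sudo systemctl is-active crio       # expect "active" after the restart
	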
	I0819 12:04:59.515602  139391 start.go:293] postStartSetup for "multinode-320821" (driver="kvm2")
	I0819 12:04:59.515618  139391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 12:04:59.515645  139391 main.go:141] libmachine: (multinode-320821) Calling .DriverName
	I0819 12:04:59.516012  139391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 12:04:59.516044  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHHostname
	I0819 12:04:59.519381  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:04:59.519856  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:04:59.519884  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:04:59.520090  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHPort
	I0819 12:04:59.520309  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:04:59.520474  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHUsername
	I0819 12:04:59.520596  139391 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/multinode-320821/id_rsa Username:docker}
	I0819 12:04:59.602664  139391 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 12:04:59.606735  139391 command_runner.go:130] > NAME=Buildroot
	I0819 12:04:59.606765  139391 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0819 12:04:59.606770  139391 command_runner.go:130] > ID=buildroot
	I0819 12:04:59.606775  139391 command_runner.go:130] > VERSION_ID=2023.02.9
	I0819 12:04:59.606780  139391 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0819 12:04:59.607093  139391 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 12:04:59.607125  139391 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/addons for local assets ...
	I0819 12:04:59.607204  139391 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/files for local assets ...
	I0819 12:04:59.607293  139391 filesync.go:149] local asset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> 1066322.pem in /etc/ssl/certs
	I0819 12:04:59.607309  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> /etc/ssl/certs/1066322.pem
	I0819 12:04:59.607421  139391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 12:04:59.616930  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 12:04:59.640345  139391 start.go:296] duration metric: took 124.725493ms for postStartSetup
	I0819 12:04:59.640395  139391 fix.go:56] duration metric: took 1m32.010746949s for fixHost
	I0819 12:04:59.640421  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHHostname
	I0819 12:04:59.643353  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:04:59.643896  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:04:59.643939  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:04:59.644114  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHPort
	I0819 12:04:59.644334  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:04:59.644527  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:04:59.644669  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHUsername
	I0819 12:04:59.644818  139391 main.go:141] libmachine: Using SSH client type: native
	I0819 12:04:59.645022  139391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0819 12:04:59.645034  139391 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 12:04:59.748480  139391 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724069099.724700385
	
	I0819 12:04:59.748510  139391 fix.go:216] guest clock: 1724069099.724700385
	I0819 12:04:59.748522  139391 fix.go:229] Guest: 2024-08-19 12:04:59.724700385 +0000 UTC Remote: 2024-08-19 12:04:59.640401835 +0000 UTC m=+92.144249650 (delta=84.29855ms)
	I0819 12:04:59.748556  139391 fix.go:200] guest clock delta is within tolerance: 84.29855ms
	I0819 12:04:59.748563  139391 start.go:83] releasing machines lock for "multinode-320821", held for 1m32.118928184s
	I0819 12:04:59.748591  139391 main.go:141] libmachine: (multinode-320821) Calling .DriverName
	I0819 12:04:59.748908  139391 main.go:141] libmachine: (multinode-320821) Calling .GetIP
	I0819 12:04:59.751690  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:04:59.752117  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:04:59.752153  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:04:59.752284  139391 main.go:141] libmachine: (multinode-320821) Calling .DriverName
	I0819 12:04:59.752814  139391 main.go:141] libmachine: (multinode-320821) Calling .DriverName
	I0819 12:04:59.753001  139391 main.go:141] libmachine: (multinode-320821) Calling .DriverName
	I0819 12:04:59.753071  139391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 12:04:59.753136  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHHostname
	I0819 12:04:59.753257  139391 ssh_runner.go:195] Run: cat /version.json
	I0819 12:04:59.753279  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHHostname
	I0819 12:04:59.755679  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:04:59.756035  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:04:59.756065  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:04:59.756088  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:04:59.756213  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHPort
	I0819 12:04:59.756391  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:04:59.756454  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:04:59.756481  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:04:59.756551  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHUsername
	I0819 12:04:59.756666  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHPort
	I0819 12:04:59.756745  139391 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/multinode-320821/id_rsa Username:docker}
	I0819 12:04:59.756851  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:04:59.756977  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHUsername
	I0819 12:04:59.757113  139391 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/multinode-320821/id_rsa Username:docker}
	I0819 12:04:59.841194  139391 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0819 12:04:59.841364  139391 ssh_runner.go:195] Run: systemctl --version
	I0819 12:04:59.864130  139391 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0819 12:04:59.865028  139391 command_runner.go:130] > systemd 252 (252)
	I0819 12:04:59.865061  139391 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0819 12:04:59.865114  139391 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 12:05:00.016532  139391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 12:05:00.024850  139391 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0819 12:05:00.024925  139391 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 12:05:00.025010  139391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 12:05:00.034121  139391 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 12:05:00.034160  139391 start.go:495] detecting cgroup driver to use...
	I0819 12:05:00.034243  139391 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 12:05:00.051507  139391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 12:05:00.065018  139391 docker.go:217] disabling cri-docker service (if available) ...
	I0819 12:05:00.065101  139391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 12:05:00.078730  139391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 12:05:00.092719  139391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 12:05:00.227017  139391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 12:05:00.362161  139391 docker.go:233] disabling docker service ...
	I0819 12:05:00.362228  139391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 12:05:00.379630  139391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 12:05:00.393082  139391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 12:05:00.531254  139391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 12:05:00.672253  139391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
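	The sequence above stops and masks both cri-docker and docker so that CRI-O is the only runtime left serving the node. A quick manual check of that state (inside the VM; "masked" is the expected answer after the steps above):
	
		sudo systemctl is-enabled docker.service cri-docker.service
		sudo systemctl is-active --quiet docker && echo running || echo stopped
	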
	I0819 12:05:00.688645  139391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 12:05:00.707575  139391 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0819 12:05:00.707636  139391 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 12:05:00.707683  139391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:05:00.718359  139391 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 12:05:00.718438  139391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:05:00.729156  139391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:05:00.739812  139391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:05:00.750337  139391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 12:05:00.761419  139391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:05:00.772023  139391 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:05:00.783220  139391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:05:00.793819  139391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 12:05:00.806113  139391 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0819 12:05:00.806284  139391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 12:05:00.826838  139391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:05:00.972814  139391 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 12:05:11.010061  139391 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.037202005s)
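	Taken together, the sed edits above pin the pause image to registry.k8s.io/pause:3.10, switch CRI-O to the cgroupfs cgroup manager with conmon in the "pod" cgroup, and open unprivileged ports via a default sysctl, before the daemon-reload and the crio restart that completes here. A spot check of the resulting file (inside the VM):
	
		grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
		sudo systemctl is-active crio
	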
	I0819 12:05:11.010102  139391 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 12:05:11.010200  139391 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 12:05:11.015047  139391 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0819 12:05:11.015077  139391 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0819 12:05:11.015088  139391 command_runner.go:130] > Device: 0,22	Inode: 1349        Links: 1
	I0819 12:05:11.015097  139391 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 12:05:11.015102  139391 command_runner.go:130] > Access: 2024-08-19 12:05:10.880618007 +0000
	I0819 12:05:11.015108  139391 command_runner.go:130] > Modify: 2024-08-19 12:05:10.880618007 +0000
	I0819 12:05:11.015113  139391 command_runner.go:130] > Change: 2024-08-19 12:05:10.880618007 +0000
	I0819 12:05:11.015117  139391 command_runner.go:130] >  Birth: -
	I0819 12:05:11.015146  139391 start.go:563] Will wait 60s for crictl version
	I0819 12:05:11.015191  139391 ssh_runner.go:195] Run: which crictl
	I0819 12:05:11.019009  139391 command_runner.go:130] > /usr/bin/crictl
	I0819 12:05:11.019090  139391 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 12:05:11.053069  139391 command_runner.go:130] > Version:  0.1.0
	I0819 12:05:11.053107  139391 command_runner.go:130] > RuntimeName:  cri-o
	I0819 12:05:11.053114  139391 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0819 12:05:11.053123  139391 command_runner.go:130] > RuntimeApiVersion:  v1
	I0819 12:05:11.054275  139391 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
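	The runtime identity reported here can be reproduced with the same tools the test shells out to (inside the VM):
	
		sudo /usr/bin/crictl version    # CRI API v1, runtime cri-o 1.29.1 per the output above
		crio --version                  # build details, as dumped twice below
	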
	I0819 12:05:11.054352  139391 ssh_runner.go:195] Run: crio --version
	I0819 12:05:11.085702  139391 command_runner.go:130] > crio version 1.29.1
	I0819 12:05:11.085740  139391 command_runner.go:130] > Version:        1.29.1
	I0819 12:05:11.085749  139391 command_runner.go:130] > GitCommit:      unknown
	I0819 12:05:11.085755  139391 command_runner.go:130] > GitCommitDate:  unknown
	I0819 12:05:11.085760  139391 command_runner.go:130] > GitTreeState:   clean
	I0819 12:05:11.085771  139391 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 12:05:11.085777  139391 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 12:05:11.085783  139391 command_runner.go:130] > Compiler:       gc
	I0819 12:05:11.085789  139391 command_runner.go:130] > Platform:       linux/amd64
	I0819 12:05:11.085805  139391 command_runner.go:130] > Linkmode:       dynamic
	I0819 12:05:11.085820  139391 command_runner.go:130] > BuildTags:      
	I0819 12:05:11.085826  139391 command_runner.go:130] >   containers_image_ostree_stub
	I0819 12:05:11.085831  139391 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 12:05:11.085835  139391 command_runner.go:130] >   btrfs_noversion
	I0819 12:05:11.085840  139391 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 12:05:11.085844  139391 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 12:05:11.085847  139391 command_runner.go:130] >   seccomp
	I0819 12:05:11.085852  139391 command_runner.go:130] > LDFlags:          unknown
	I0819 12:05:11.085856  139391 command_runner.go:130] > SeccompEnabled:   true
	I0819 12:05:11.085860  139391 command_runner.go:130] > AppArmorEnabled:  false
	I0819 12:05:11.085950  139391 ssh_runner.go:195] Run: crio --version
	I0819 12:05:11.117992  139391 command_runner.go:130] > crio version 1.29.1
	I0819 12:05:11.118022  139391 command_runner.go:130] > Version:        1.29.1
	I0819 12:05:11.118031  139391 command_runner.go:130] > GitCommit:      unknown
	I0819 12:05:11.118037  139391 command_runner.go:130] > GitCommitDate:  unknown
	I0819 12:05:11.118043  139391 command_runner.go:130] > GitTreeState:   clean
	I0819 12:05:11.118051  139391 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 12:05:11.118057  139391 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 12:05:11.118062  139391 command_runner.go:130] > Compiler:       gc
	I0819 12:05:11.118066  139391 command_runner.go:130] > Platform:       linux/amd64
	I0819 12:05:11.118073  139391 command_runner.go:130] > Linkmode:       dynamic
	I0819 12:05:11.118078  139391 command_runner.go:130] > BuildTags:      
	I0819 12:05:11.118090  139391 command_runner.go:130] >   containers_image_ostree_stub
	I0819 12:05:11.118096  139391 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 12:05:11.118102  139391 command_runner.go:130] >   btrfs_noversion
	I0819 12:05:11.118110  139391 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 12:05:11.118117  139391 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 12:05:11.118126  139391 command_runner.go:130] >   seccomp
	I0819 12:05:11.118133  139391 command_runner.go:130] > LDFlags:          unknown
	I0819 12:05:11.118141  139391 command_runner.go:130] > SeccompEnabled:   true
	I0819 12:05:11.118145  139391 command_runner.go:130] > AppArmorEnabled:  false
	I0819 12:05:11.120119  139391 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 12:05:11.121702  139391 main.go:141] libmachine: (multinode-320821) Calling .GetIP
	I0819 12:05:11.124732  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:05:11.125131  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:05:11.125162  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:05:11.125351  139391 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 12:05:11.129578  139391 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0819 12:05:11.129787  139391 kubeadm.go:883] updating cluster {Name:multinode-320821 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-320821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.19 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 12:05:11.129942  139391 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:05:11.130002  139391 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:05:11.170433  139391 command_runner.go:130] > {
	I0819 12:05:11.170459  139391 command_runner.go:130] >   "images": [
	I0819 12:05:11.170471  139391 command_runner.go:130] >     {
	I0819 12:05:11.170478  139391 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 12:05:11.170489  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.170494  139391 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 12:05:11.170499  139391 command_runner.go:130] >       ],
	I0819 12:05:11.170502  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.170512  139391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 12:05:11.170520  139391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 12:05:11.170525  139391 command_runner.go:130] >       ],
	I0819 12:05:11.170532  139391 command_runner.go:130] >       "size": "87165492",
	I0819 12:05:11.170537  139391 command_runner.go:130] >       "uid": null,
	I0819 12:05:11.170542  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.170554  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.170561  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.170569  139391 command_runner.go:130] >     },
	I0819 12:05:11.170573  139391 command_runner.go:130] >     {
	I0819 12:05:11.170584  139391 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0819 12:05:11.170593  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.170599  139391 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0819 12:05:11.170610  139391 command_runner.go:130] >       ],
	I0819 12:05:11.170615  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.170621  139391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0819 12:05:11.170628  139391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0819 12:05:11.170634  139391 command_runner.go:130] >       ],
	I0819 12:05:11.170639  139391 command_runner.go:130] >       "size": "87190579",
	I0819 12:05:11.170645  139391 command_runner.go:130] >       "uid": null,
	I0819 12:05:11.170657  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.170667  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.170680  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.170687  139391 command_runner.go:130] >     },
	I0819 12:05:11.170691  139391 command_runner.go:130] >     {
	I0819 12:05:11.170697  139391 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0819 12:05:11.170704  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.170710  139391 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0819 12:05:11.170715  139391 command_runner.go:130] >       ],
	I0819 12:05:11.170720  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.170730  139391 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0819 12:05:11.170743  139391 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0819 12:05:11.170753  139391 command_runner.go:130] >       ],
	I0819 12:05:11.170763  139391 command_runner.go:130] >       "size": "1363676",
	I0819 12:05:11.170773  139391 command_runner.go:130] >       "uid": null,
	I0819 12:05:11.170782  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.170792  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.170799  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.170802  139391 command_runner.go:130] >     },
	I0819 12:05:11.170808  139391 command_runner.go:130] >     {
	I0819 12:05:11.170815  139391 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 12:05:11.170821  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.170829  139391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 12:05:11.170837  139391 command_runner.go:130] >       ],
	I0819 12:05:11.170847  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.170863  139391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 12:05:11.170882  139391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 12:05:11.170890  139391 command_runner.go:130] >       ],
	I0819 12:05:11.170894  139391 command_runner.go:130] >       "size": "31470524",
	I0819 12:05:11.170899  139391 command_runner.go:130] >       "uid": null,
	I0819 12:05:11.170903  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.170909  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.170916  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.170924  139391 command_runner.go:130] >     },
	I0819 12:05:11.170933  139391 command_runner.go:130] >     {
	I0819 12:05:11.170946  139391 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 12:05:11.170955  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.170966  139391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 12:05:11.170974  139391 command_runner.go:130] >       ],
	I0819 12:05:11.170983  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.170993  139391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 12:05:11.171007  139391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 12:05:11.171016  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171023  139391 command_runner.go:130] >       "size": "61245718",
	I0819 12:05:11.171033  139391 command_runner.go:130] >       "uid": null,
	I0819 12:05:11.171044  139391 command_runner.go:130] >       "username": "nonroot",
	I0819 12:05:11.171055  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.171064  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.171073  139391 command_runner.go:130] >     },
	I0819 12:05:11.171081  139391 command_runner.go:130] >     {
	I0819 12:05:11.171090  139391 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 12:05:11.171097  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.171105  139391 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 12:05:11.171114  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171121  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.171135  139391 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 12:05:11.171149  139391 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 12:05:11.171157  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171167  139391 command_runner.go:130] >       "size": "149009664",
	I0819 12:05:11.171176  139391 command_runner.go:130] >       "uid": {
	I0819 12:05:11.171185  139391 command_runner.go:130] >         "value": "0"
	I0819 12:05:11.171191  139391 command_runner.go:130] >       },
	I0819 12:05:11.171196  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.171205  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.171212  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.171222  139391 command_runner.go:130] >     },
	I0819 12:05:11.171231  139391 command_runner.go:130] >     {
	I0819 12:05:11.171243  139391 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 12:05:11.171253  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.171263  139391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 12:05:11.171270  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171276  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.171288  139391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 12:05:11.171302  139391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 12:05:11.171312  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171321  139391 command_runner.go:130] >       "size": "95233506",
	I0819 12:05:11.171330  139391 command_runner.go:130] >       "uid": {
	I0819 12:05:11.171339  139391 command_runner.go:130] >         "value": "0"
	I0819 12:05:11.171348  139391 command_runner.go:130] >       },
	I0819 12:05:11.171357  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.171364  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.171369  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.171378  139391 command_runner.go:130] >     },
	I0819 12:05:11.171387  139391 command_runner.go:130] >     {
	I0819 12:05:11.171398  139391 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 12:05:11.171409  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.171420  139391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 12:05:11.171429  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171439  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.171460  139391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 12:05:11.171473  139391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 12:05:11.171486  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171494  139391 command_runner.go:130] >       "size": "89437512",
	I0819 12:05:11.171501  139391 command_runner.go:130] >       "uid": {
	I0819 12:05:11.171510  139391 command_runner.go:130] >         "value": "0"
	I0819 12:05:11.171516  139391 command_runner.go:130] >       },
	I0819 12:05:11.171522  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.171528  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.171534  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.171539  139391 command_runner.go:130] >     },
	I0819 12:05:11.171546  139391 command_runner.go:130] >     {
	I0819 12:05:11.171555  139391 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 12:05:11.171561  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.171568  139391 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 12:05:11.171573  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171579  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.171603  139391 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 12:05:11.171613  139391 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 12:05:11.171619  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171626  139391 command_runner.go:130] >       "size": "92728217",
	I0819 12:05:11.171633  139391 command_runner.go:130] >       "uid": null,
	I0819 12:05:11.171640  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.171646  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.171653  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.171659  139391 command_runner.go:130] >     },
	I0819 12:05:11.171666  139391 command_runner.go:130] >     {
	I0819 12:05:11.171678  139391 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 12:05:11.171690  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.171703  139391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 12:05:11.171711  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171721  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.171747  139391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 12:05:11.171763  139391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 12:05:11.171772  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171780  139391 command_runner.go:130] >       "size": "68420936",
	I0819 12:05:11.171789  139391 command_runner.go:130] >       "uid": {
	I0819 12:05:11.171796  139391 command_runner.go:130] >         "value": "0"
	I0819 12:05:11.171803  139391 command_runner.go:130] >       },
	I0819 12:05:11.171812  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.171817  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.171823  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.171831  139391 command_runner.go:130] >     },
	I0819 12:05:11.171837  139391 command_runner.go:130] >     {
	I0819 12:05:11.171850  139391 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 12:05:11.171859  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.171869  139391 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 12:05:11.171877  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171885  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.171898  139391 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 12:05:11.171908  139391 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 12:05:11.171917  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171927  139391 command_runner.go:130] >       "size": "742080",
	I0819 12:05:11.171932  139391 command_runner.go:130] >       "uid": {
	I0819 12:05:11.171944  139391 command_runner.go:130] >         "value": "65535"
	I0819 12:05:11.171953  139391 command_runner.go:130] >       },
	I0819 12:05:11.171962  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.171971  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.171980  139391 command_runner.go:130] >       "pinned": true
	I0819 12:05:11.171989  139391 command_runner.go:130] >     }
	I0819 12:05:11.171996  139391 command_runner.go:130] >   ]
	I0819 12:05:11.171999  139391 command_runner.go:130] > }
	I0819 12:05:11.172215  139391 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 12:05:11.172227  139391 crio.go:433] Images already preloaded, skipping extraction
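	The JSON listing above is how minikube decides the preload tarball is unnecessary: every image required for Kubernetes v1.31.0 on CRI-O is already present. A compact way to reproduce that view by hand (jq is an assumption; it may not ship in the Buildroot guest):
	
		sudo crictl images --output json | jq -r '.images[].repoTags[]'
	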
	I0819 12:05:11.172288  139391 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:05:11.207439  139391 command_runner.go:130] > {
	I0819 12:05:11.207479  139391 command_runner.go:130] >   "images": [
	I0819 12:05:11.207484  139391 command_runner.go:130] >     {
	I0819 12:05:11.207492  139391 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 12:05:11.207497  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.207503  139391 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 12:05:11.207507  139391 command_runner.go:130] >       ],
	I0819 12:05:11.207511  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.207520  139391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 12:05:11.207528  139391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 12:05:11.207532  139391 command_runner.go:130] >       ],
	I0819 12:05:11.207536  139391 command_runner.go:130] >       "size": "87165492",
	I0819 12:05:11.207540  139391 command_runner.go:130] >       "uid": null,
	I0819 12:05:11.207544  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.207552  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.207556  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.207560  139391 command_runner.go:130] >     },
	I0819 12:05:11.207564  139391 command_runner.go:130] >     {
	I0819 12:05:11.207569  139391 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0819 12:05:11.207573  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.207578  139391 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0819 12:05:11.207582  139391 command_runner.go:130] >       ],
	I0819 12:05:11.207585  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.207592  139391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0819 12:05:11.207599  139391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0819 12:05:11.207602  139391 command_runner.go:130] >       ],
	I0819 12:05:11.207606  139391 command_runner.go:130] >       "size": "87190579",
	I0819 12:05:11.207613  139391 command_runner.go:130] >       "uid": null,
	I0819 12:05:11.207622  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.207626  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.207631  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.207637  139391 command_runner.go:130] >     },
	I0819 12:05:11.207640  139391 command_runner.go:130] >     {
	I0819 12:05:11.207645  139391 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0819 12:05:11.207649  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.207654  139391 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0819 12:05:11.207660  139391 command_runner.go:130] >       ],
	I0819 12:05:11.207664  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.207671  139391 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0819 12:05:11.207680  139391 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0819 12:05:11.207683  139391 command_runner.go:130] >       ],
	I0819 12:05:11.207687  139391 command_runner.go:130] >       "size": "1363676",
	I0819 12:05:11.207691  139391 command_runner.go:130] >       "uid": null,
	I0819 12:05:11.207695  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.207709  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.207717  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.207739  139391 command_runner.go:130] >     },
	I0819 12:05:11.207743  139391 command_runner.go:130] >     {
	I0819 12:05:11.207749  139391 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 12:05:11.207753  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.207758  139391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 12:05:11.207762  139391 command_runner.go:130] >       ],
	I0819 12:05:11.207766  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.207775  139391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 12:05:11.207786  139391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 12:05:11.207790  139391 command_runner.go:130] >       ],
	I0819 12:05:11.207794  139391 command_runner.go:130] >       "size": "31470524",
	I0819 12:05:11.207798  139391 command_runner.go:130] >       "uid": null,
	I0819 12:05:11.207802  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.207806  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.207810  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.207814  139391 command_runner.go:130] >     },
	I0819 12:05:11.207817  139391 command_runner.go:130] >     {
	I0819 12:05:11.207823  139391 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 12:05:11.207830  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.207834  139391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 12:05:11.207838  139391 command_runner.go:130] >       ],
	I0819 12:05:11.207843  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.207851  139391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 12:05:11.207860  139391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 12:05:11.207864  139391 command_runner.go:130] >       ],
	I0819 12:05:11.207867  139391 command_runner.go:130] >       "size": "61245718",
	I0819 12:05:11.207871  139391 command_runner.go:130] >       "uid": null,
	I0819 12:05:11.207877  139391 command_runner.go:130] >       "username": "nonroot",
	I0819 12:05:11.207881  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.207885  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.207888  139391 command_runner.go:130] >     },
	I0819 12:05:11.207891  139391 command_runner.go:130] >     {
	I0819 12:05:11.207897  139391 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 12:05:11.207903  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.207908  139391 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 12:05:11.207912  139391 command_runner.go:130] >       ],
	I0819 12:05:11.207917  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.207926  139391 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 12:05:11.207934  139391 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 12:05:11.207940  139391 command_runner.go:130] >       ],
	I0819 12:05:11.207943  139391 command_runner.go:130] >       "size": "149009664",
	I0819 12:05:11.207949  139391 command_runner.go:130] >       "uid": {
	I0819 12:05:11.207953  139391 command_runner.go:130] >         "value": "0"
	I0819 12:05:11.207959  139391 command_runner.go:130] >       },
	I0819 12:05:11.207964  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.207967  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.207971  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.207975  139391 command_runner.go:130] >     },
	I0819 12:05:11.207979  139391 command_runner.go:130] >     {
	I0819 12:05:11.207987  139391 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 12:05:11.207991  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.207996  139391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 12:05:11.208002  139391 command_runner.go:130] >       ],
	I0819 12:05:11.208006  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.208013  139391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 12:05:11.208022  139391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 12:05:11.208026  139391 command_runner.go:130] >       ],
	I0819 12:05:11.208031  139391 command_runner.go:130] >       "size": "95233506",
	I0819 12:05:11.208037  139391 command_runner.go:130] >       "uid": {
	I0819 12:05:11.208042  139391 command_runner.go:130] >         "value": "0"
	I0819 12:05:11.208047  139391 command_runner.go:130] >       },
	I0819 12:05:11.208051  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.208054  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.208059  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.208062  139391 command_runner.go:130] >     },
	I0819 12:05:11.208066  139391 command_runner.go:130] >     {
	I0819 12:05:11.208072  139391 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 12:05:11.208076  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.208082  139391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 12:05:11.208087  139391 command_runner.go:130] >       ],
	I0819 12:05:11.208091  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.208105  139391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 12:05:11.208115  139391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 12:05:11.208121  139391 command_runner.go:130] >       ],
	I0819 12:05:11.208125  139391 command_runner.go:130] >       "size": "89437512",
	I0819 12:05:11.208130  139391 command_runner.go:130] >       "uid": {
	I0819 12:05:11.208136  139391 command_runner.go:130] >         "value": "0"
	I0819 12:05:11.208140  139391 command_runner.go:130] >       },
	I0819 12:05:11.208144  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.208148  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.208152  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.208156  139391 command_runner.go:130] >     },
	I0819 12:05:11.208159  139391 command_runner.go:130] >     {
	I0819 12:05:11.208165  139391 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 12:05:11.208171  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.208176  139391 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 12:05:11.208180  139391 command_runner.go:130] >       ],
	I0819 12:05:11.208184  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.208192  139391 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 12:05:11.208204  139391 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 12:05:11.208210  139391 command_runner.go:130] >       ],
	I0819 12:05:11.208214  139391 command_runner.go:130] >       "size": "92728217",
	I0819 12:05:11.208218  139391 command_runner.go:130] >       "uid": null,
	I0819 12:05:11.208223  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.208227  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.208231  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.208234  139391 command_runner.go:130] >     },
	I0819 12:05:11.208238  139391 command_runner.go:130] >     {
	I0819 12:05:11.208244  139391 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 12:05:11.208250  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.208256  139391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 12:05:11.208262  139391 command_runner.go:130] >       ],
	I0819 12:05:11.208268  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.208281  139391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 12:05:11.208293  139391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 12:05:11.208301  139391 command_runner.go:130] >       ],
	I0819 12:05:11.208308  139391 command_runner.go:130] >       "size": "68420936",
	I0819 12:05:11.208317  139391 command_runner.go:130] >       "uid": {
	I0819 12:05:11.208322  139391 command_runner.go:130] >         "value": "0"
	I0819 12:05:11.208326  139391 command_runner.go:130] >       },
	I0819 12:05:11.208330  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.208335  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.208342  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.208345  139391 command_runner.go:130] >     },
	I0819 12:05:11.208350  139391 command_runner.go:130] >     {
	I0819 12:05:11.208357  139391 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 12:05:11.208363  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.208368  139391 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 12:05:11.208375  139391 command_runner.go:130] >       ],
	I0819 12:05:11.208381  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.208395  139391 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 12:05:11.208410  139391 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 12:05:11.208418  139391 command_runner.go:130] >       ],
	I0819 12:05:11.208422  139391 command_runner.go:130] >       "size": "742080",
	I0819 12:05:11.208426  139391 command_runner.go:130] >       "uid": {
	I0819 12:05:11.208429  139391 command_runner.go:130] >         "value": "65535"
	I0819 12:05:11.208433  139391 command_runner.go:130] >       },
	I0819 12:05:11.208438  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.208448  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.208456  139391 command_runner.go:130] >       "pinned": true
	I0819 12:05:11.208464  139391 command_runner.go:130] >     }
	I0819 12:05:11.208477  139391 command_runner.go:130] >   ]
	I0819 12:05:11.208483  139391 command_runner.go:130] > }
	I0819 12:05:11.208673  139391 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 12:05:11.208699  139391 cache_images.go:84] Images are preloaded, skipping loading
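The JSON block above is the image inventory that `crictl images -o json` reports inside the VM; the "all images are preloaded" message means every tag the cluster needs already appears in that list, so no load step is required. A minimal sketch of that check, assuming crictl is on the node's PATH (an illustration of the idea, not minikube's actual code path):

	// Sketch only: decode the `crictl images -o json` listing shown above and
	// report whether a few expected tags are already present on the node.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, tag := range []string{
			"registry.k8s.io/kube-apiserver:v1.31.0",
			"registry.k8s.io/etcd:3.5.15-0",
			"registry.k8s.io/pause:3.10",
		} {
			fmt.Printf("%-45s preloaded=%v\n", tag, have[tag])
		}
	}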
	I0819 12:05:11.208709  139391 kubeadm.go:934] updating node { 192.168.39.88 8443 v1.31.0 crio true true} ...
	I0819 12:05:11.208819  139391 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-320821 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-320821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
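The kubelet unit override above is assembled from the fields logged with it: the binary path comes from KubernetesVersion, --hostname-override from the node name, and --node-ip from the node's address. A minimal sketch of composing that ExecStart line from those fields (hypothetical helper for illustration, not minikube's implementation):

	// Sketch only: build the kubelet ExecStart line from the values that
	// appear in the kubeadm node/config dump above.
	package main

	import "fmt"

	type nodeConfig struct {
		KubernetesVersion string
		NodeName          string
		NodeIP            string
	}

	func kubeletExecStart(c nodeConfig) string {
		return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet"+
			" --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf"+
			" --config=/var/lib/kubelet/config.yaml"+
			" --hostname-override=%s"+
			" --kubeconfig=/etc/kubernetes/kubelet.conf"+
			" --node-ip=%s",
			c.KubernetesVersion, c.NodeName, c.NodeIP)
	}

	func main() {
		fmt.Println(kubeletExecStart(nodeConfig{
			KubernetesVersion: "v1.31.0",
			NodeName:          "multinode-320821",
			NodeIP:            "192.168.39.88",
		}))
	}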
	I0819 12:05:11.208910  139391 ssh_runner.go:195] Run: crio config
	I0819 12:05:11.239968  139391 command_runner.go:130] ! time="2024-08-19 12:05:11.215325817Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0819 12:05:11.246394  139391 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0819 12:05:11.252046  139391 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0819 12:05:11.252071  139391 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0819 12:05:11.252077  139391 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0819 12:05:11.252081  139391 command_runner.go:130] > #
	I0819 12:05:11.252088  139391 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0819 12:05:11.252094  139391 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0819 12:05:11.252099  139391 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0819 12:05:11.252108  139391 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0819 12:05:11.252112  139391 command_runner.go:130] > # reload'.
	I0819 12:05:11.252118  139391 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0819 12:05:11.252124  139391 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0819 12:05:11.252133  139391 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0819 12:05:11.252139  139391 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0819 12:05:11.252143  139391 command_runner.go:130] > [crio]
	I0819 12:05:11.252149  139391 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0819 12:05:11.252155  139391 command_runner.go:130] > # containers images, in this directory.
	I0819 12:05:11.252159  139391 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0819 12:05:11.252167  139391 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0819 12:05:11.252171  139391 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0819 12:05:11.252179  139391 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0819 12:05:11.252184  139391 command_runner.go:130] > # imagestore = ""
	I0819 12:05:11.252190  139391 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0819 12:05:11.252200  139391 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0819 12:05:11.252206  139391 command_runner.go:130] > storage_driver = "overlay"
	I0819 12:05:11.252217  139391 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0819 12:05:11.252226  139391 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0819 12:05:11.252251  139391 command_runner.go:130] > storage_option = [
	I0819 12:05:11.252261  139391 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0819 12:05:11.252267  139391 command_runner.go:130] > ]
	I0819 12:05:11.252276  139391 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0819 12:05:11.252286  139391 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0819 12:05:11.252295  139391 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0819 12:05:11.252300  139391 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0819 12:05:11.252306  139391 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0819 12:05:11.252311  139391 command_runner.go:130] > # always happen on a node reboot
	I0819 12:05:11.252316  139391 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0819 12:05:11.252327  139391 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0819 12:05:11.252335  139391 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0819 12:05:11.252340  139391 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0819 12:05:11.252346  139391 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0819 12:05:11.252354  139391 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0819 12:05:11.252364  139391 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0819 12:05:11.252367  139391 command_runner.go:130] > # internal_wipe = true
	I0819 12:05:11.252378  139391 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0819 12:05:11.252389  139391 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0819 12:05:11.252396  139391 command_runner.go:130] > # internal_repair = false
	I0819 12:05:11.252408  139391 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0819 12:05:11.252419  139391 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0819 12:05:11.252427  139391 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0819 12:05:11.252432  139391 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0819 12:05:11.252443  139391 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0819 12:05:11.252449  139391 command_runner.go:130] > [crio.api]
	I0819 12:05:11.252454  139391 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0819 12:05:11.252461  139391 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0819 12:05:11.252466  139391 command_runner.go:130] > # IP address on which the stream server will listen.
	I0819 12:05:11.252478  139391 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0819 12:05:11.252492  139391 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0819 12:05:11.252502  139391 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0819 12:05:11.252511  139391 command_runner.go:130] > # stream_port = "0"
	I0819 12:05:11.252522  139391 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0819 12:05:11.252531  139391 command_runner.go:130] > # stream_enable_tls = false
	I0819 12:05:11.252542  139391 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0819 12:05:11.252548  139391 command_runner.go:130] > # stream_idle_timeout = ""
	I0819 12:05:11.252558  139391 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0819 12:05:11.252566  139391 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0819 12:05:11.252574  139391 command_runner.go:130] > # minutes.
	I0819 12:05:11.252584  139391 command_runner.go:130] > # stream_tls_cert = ""
	I0819 12:05:11.252597  139391 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0819 12:05:11.252607  139391 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0819 12:05:11.252617  139391 command_runner.go:130] > # stream_tls_key = ""
	I0819 12:05:11.252627  139391 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0819 12:05:11.252639  139391 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0819 12:05:11.252658  139391 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0819 12:05:11.252664  139391 command_runner.go:130] > # stream_tls_ca = ""
	I0819 12:05:11.252672  139391 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 12:05:11.252682  139391 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0819 12:05:11.252696  139391 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 12:05:11.252706  139391 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0819 12:05:11.252717  139391 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0819 12:05:11.252728  139391 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0819 12:05:11.252737  139391 command_runner.go:130] > [crio.runtime]
	I0819 12:05:11.252747  139391 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0819 12:05:11.252760  139391 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0819 12:05:11.252767  139391 command_runner.go:130] > # "nofile=1024:2048"
	I0819 12:05:11.252776  139391 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0819 12:05:11.252790  139391 command_runner.go:130] > # default_ulimits = [
	I0819 12:05:11.252798  139391 command_runner.go:130] > # ]
	I0819 12:05:11.252808  139391 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0819 12:05:11.252818  139391 command_runner.go:130] > # no_pivot = false
	I0819 12:05:11.252830  139391 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0819 12:05:11.252842  139391 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0819 12:05:11.252853  139391 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0819 12:05:11.252866  139391 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0819 12:05:11.252875  139391 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0819 12:05:11.252883  139391 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 12:05:11.252893  139391 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0819 12:05:11.252906  139391 command_runner.go:130] > # Cgroup setting for conmon
	I0819 12:05:11.252920  139391 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0819 12:05:11.252929  139391 command_runner.go:130] > conmon_cgroup = "pod"
	I0819 12:05:11.252942  139391 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0819 12:05:11.252952  139391 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0819 12:05:11.252969  139391 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 12:05:11.252976  139391 command_runner.go:130] > conmon_env = [
	I0819 12:05:11.252983  139391 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 12:05:11.252992  139391 command_runner.go:130] > ]
	I0819 12:05:11.253003  139391 command_runner.go:130] > # Additional environment variables to set for all the
	I0819 12:05:11.253014  139391 command_runner.go:130] > # containers. These are overridden if set in the
	I0819 12:05:11.253027  139391 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0819 12:05:11.253035  139391 command_runner.go:130] > # default_env = [
	I0819 12:05:11.253043  139391 command_runner.go:130] > # ]
	I0819 12:05:11.253056  139391 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0819 12:05:11.253070  139391 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0819 12:05:11.253076  139391 command_runner.go:130] > # selinux = false
	I0819 12:05:11.253086  139391 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0819 12:05:11.253099  139391 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0819 12:05:11.253111  139391 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0819 12:05:11.253120  139391 command_runner.go:130] > # seccomp_profile = ""
	I0819 12:05:11.253132  139391 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0819 12:05:11.253143  139391 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0819 12:05:11.253155  139391 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0819 12:05:11.253162  139391 command_runner.go:130] > # which might increase security.
	I0819 12:05:11.253168  139391 command_runner.go:130] > # This option is currently deprecated,
	I0819 12:05:11.253180  139391 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0819 12:05:11.253191  139391 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0819 12:05:11.253202  139391 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0819 12:05:11.253215  139391 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0819 12:05:11.253228  139391 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0819 12:05:11.253241  139391 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0819 12:05:11.253252  139391 command_runner.go:130] > # This option supports live configuration reload.
	I0819 12:05:11.253261  139391 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0819 12:05:11.253271  139391 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0819 12:05:11.253281  139391 command_runner.go:130] > # the cgroup blockio controller.
	I0819 12:05:11.253292  139391 command_runner.go:130] > # blockio_config_file = ""
	I0819 12:05:11.253306  139391 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0819 12:05:11.253317  139391 command_runner.go:130] > # blockio parameters.
	I0819 12:05:11.253326  139391 command_runner.go:130] > # blockio_reload = false
	I0819 12:05:11.253338  139391 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0819 12:05:11.253348  139391 command_runner.go:130] > # irqbalance daemon.
	I0819 12:05:11.253359  139391 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0819 12:05:11.253371  139391 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0819 12:05:11.253385  139391 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0819 12:05:11.253399  139391 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0819 12:05:11.253412  139391 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0819 12:05:11.253427  139391 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0819 12:05:11.253437  139391 command_runner.go:130] > # This option supports live configuration reload.
	I0819 12:05:11.253448  139391 command_runner.go:130] > # rdt_config_file = ""
	I0819 12:05:11.253459  139391 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0819 12:05:11.253466  139391 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0819 12:05:11.253489  139391 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0819 12:05:11.253500  139391 command_runner.go:130] > # separate_pull_cgroup = ""
	I0819 12:05:11.253510  139391 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0819 12:05:11.253523  139391 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0819 12:05:11.253532  139391 command_runner.go:130] > # will be added.
	I0819 12:05:11.253541  139391 command_runner.go:130] > # default_capabilities = [
	I0819 12:05:11.253549  139391 command_runner.go:130] > # 	"CHOWN",
	I0819 12:05:11.253558  139391 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0819 12:05:11.253567  139391 command_runner.go:130] > # 	"FSETID",
	I0819 12:05:11.253574  139391 command_runner.go:130] > # 	"FOWNER",
	I0819 12:05:11.253577  139391 command_runner.go:130] > # 	"SETGID",
	I0819 12:05:11.253585  139391 command_runner.go:130] > # 	"SETUID",
	I0819 12:05:11.253594  139391 command_runner.go:130] > # 	"SETPCAP",
	I0819 12:05:11.253601  139391 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0819 12:05:11.253609  139391 command_runner.go:130] > # 	"KILL",
	I0819 12:05:11.253615  139391 command_runner.go:130] > # ]
	I0819 12:05:11.253629  139391 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0819 12:05:11.253642  139391 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0819 12:05:11.253653  139391 command_runner.go:130] > # add_inheritable_capabilities = false
	I0819 12:05:11.253662  139391 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0819 12:05:11.253672  139391 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 12:05:11.253677  139391 command_runner.go:130] > default_sysctls = [
	I0819 12:05:11.253687  139391 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0819 12:05:11.253696  139391 command_runner.go:130] > ]
	I0819 12:05:11.253703  139391 command_runner.go:130] > # List of devices on the host that a
	I0819 12:05:11.253721  139391 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0819 12:05:11.253733  139391 command_runner.go:130] > # allowed_devices = [
	I0819 12:05:11.253742  139391 command_runner.go:130] > # 	"/dev/fuse",
	I0819 12:05:11.253749  139391 command_runner.go:130] > # ]
	I0819 12:05:11.253759  139391 command_runner.go:130] > # List of additional devices, specified as
	I0819 12:05:11.253771  139391 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0819 12:05:11.253780  139391 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0819 12:05:11.253799  139391 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 12:05:11.253809  139391 command_runner.go:130] > # additional_devices = [
	I0819 12:05:11.253815  139391 command_runner.go:130] > # ]
	I0819 12:05:11.253826  139391 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0819 12:05:11.253835  139391 command_runner.go:130] > # cdi_spec_dirs = [
	I0819 12:05:11.253847  139391 command_runner.go:130] > # 	"/etc/cdi",
	I0819 12:05:11.253857  139391 command_runner.go:130] > # 	"/var/run/cdi",
	I0819 12:05:11.253865  139391 command_runner.go:130] > # ]
	I0819 12:05:11.253877  139391 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0819 12:05:11.253886  139391 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0819 12:05:11.253895  139391 command_runner.go:130] > # Defaults to false.
	I0819 12:05:11.253906  139391 command_runner.go:130] > # device_ownership_from_security_context = false
	I0819 12:05:11.253918  139391 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0819 12:05:11.253930  139391 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0819 12:05:11.253940  139391 command_runner.go:130] > # hooks_dir = [
	I0819 12:05:11.253950  139391 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0819 12:05:11.253958  139391 command_runner.go:130] > # ]
	I0819 12:05:11.253968  139391 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0819 12:05:11.253977  139391 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0819 12:05:11.253988  139391 command_runner.go:130] > # its default mounts from the following two files:
	I0819 12:05:11.253997  139391 command_runner.go:130] > #
	I0819 12:05:11.254007  139391 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0819 12:05:11.254021  139391 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0819 12:05:11.254033  139391 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0819 12:05:11.254041  139391 command_runner.go:130] > #
	I0819 12:05:11.254053  139391 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0819 12:05:11.254066  139391 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0819 12:05:11.254075  139391 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0819 12:05:11.254085  139391 command_runner.go:130] > #      only add mounts it finds in this file.
	I0819 12:05:11.254093  139391 command_runner.go:130] > #
	I0819 12:05:11.254104  139391 command_runner.go:130] > # default_mounts_file = ""
	I0819 12:05:11.254115  139391 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0819 12:05:11.254128  139391 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0819 12:05:11.254137  139391 command_runner.go:130] > pids_limit = 1024
	I0819 12:05:11.254150  139391 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0819 12:05:11.254159  139391 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0819 12:05:11.254170  139391 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0819 12:05:11.254186  139391 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0819 12:05:11.254196  139391 command_runner.go:130] > # log_size_max = -1
	I0819 12:05:11.254210  139391 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0819 12:05:11.254223  139391 command_runner.go:130] > # log_to_journald = false
	I0819 12:05:11.254235  139391 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0819 12:05:11.254244  139391 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0819 12:05:11.254252  139391 command_runner.go:130] > # Path to directory for container attach sockets.
	I0819 12:05:11.254262  139391 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0819 12:05:11.254274  139391 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0819 12:05:11.254284  139391 command_runner.go:130] > # bind_mount_prefix = ""
	I0819 12:05:11.254296  139391 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0819 12:05:11.254306  139391 command_runner.go:130] > # read_only = false
	I0819 12:05:11.254318  139391 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0819 12:05:11.254330  139391 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0819 12:05:11.254339  139391 command_runner.go:130] > # live configuration reload.
	I0819 12:05:11.254345  139391 command_runner.go:130] > # log_level = "info"
	I0819 12:05:11.254353  139391 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0819 12:05:11.254364  139391 command_runner.go:130] > # This option supports live configuration reload.
	I0819 12:05:11.254374  139391 command_runner.go:130] > # log_filter = ""
	I0819 12:05:11.254384  139391 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0819 12:05:11.254399  139391 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0819 12:05:11.254409  139391 command_runner.go:130] > # separated by comma.
	I0819 12:05:11.254423  139391 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 12:05:11.254434  139391 command_runner.go:130] > # uid_mappings = ""
	I0819 12:05:11.254445  139391 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0819 12:05:11.254454  139391 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0819 12:05:11.254463  139391 command_runner.go:130] > # separated by comma.
	I0819 12:05:11.254478  139391 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 12:05:11.254488  139391 command_runner.go:130] > # gid_mappings = ""
	I0819 12:05:11.254498  139391 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0819 12:05:11.254510  139391 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 12:05:11.254523  139391 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 12:05:11.254537  139391 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 12:05:11.254545  139391 command_runner.go:130] > # minimum_mappable_uid = -1
	I0819 12:05:11.254553  139391 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0819 12:05:11.254565  139391 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 12:05:11.254578  139391 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 12:05:11.254594  139391 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 12:05:11.254607  139391 command_runner.go:130] > # minimum_mappable_gid = -1
	I0819 12:05:11.254619  139391 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0819 12:05:11.254631  139391 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0819 12:05:11.254643  139391 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0819 12:05:11.254649  139391 command_runner.go:130] > # ctr_stop_timeout = 30
	I0819 12:05:11.254656  139391 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0819 12:05:11.254669  139391 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0819 12:05:11.254680  139391 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0819 12:05:11.254688  139391 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0819 12:05:11.254697  139391 command_runner.go:130] > drop_infra_ctr = false
	I0819 12:05:11.254710  139391 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0819 12:05:11.254721  139391 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0819 12:05:11.254735  139391 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0819 12:05:11.254744  139391 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0819 12:05:11.254755  139391 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0819 12:05:11.254766  139391 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0819 12:05:11.254779  139391 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0819 12:05:11.254794  139391 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0819 12:05:11.254803  139391 command_runner.go:130] > # shared_cpuset = ""
	I0819 12:05:11.254816  139391 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0819 12:05:11.254826  139391 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0819 12:05:11.254837  139391 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0819 12:05:11.254847  139391 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0819 12:05:11.254856  139391 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0819 12:05:11.254868  139391 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0819 12:05:11.254881  139391 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0819 12:05:11.254890  139391 command_runner.go:130] > # enable_criu_support = false
	I0819 12:05:11.254902  139391 command_runner.go:130] > # Enable/disable the generation of the container,
	I0819 12:05:11.254914  139391 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0819 12:05:11.254923  139391 command_runner.go:130] > # enable_pod_events = false
	I0819 12:05:11.254933  139391 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0819 12:05:11.254945  139391 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0819 12:05:11.254957  139391 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0819 12:05:11.254968  139391 command_runner.go:130] > # default_runtime = "runc"
	I0819 12:05:11.254980  139391 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0819 12:05:11.254994  139391 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0819 12:05:11.255009  139391 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0819 12:05:11.255022  139391 command_runner.go:130] > # creation as a file is not desired either.
	I0819 12:05:11.255035  139391 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0819 12:05:11.255047  139391 command_runner.go:130] > # the hostname is being managed dynamically.
	I0819 12:05:11.255057  139391 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0819 12:05:11.255063  139391 command_runner.go:130] > # ]
	I0819 12:05:11.255076  139391 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0819 12:05:11.255089  139391 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0819 12:05:11.255101  139391 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0819 12:05:11.255112  139391 command_runner.go:130] > # Each entry in the table should follow the format:
	I0819 12:05:11.255119  139391 command_runner.go:130] > #
	I0819 12:05:11.255126  139391 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0819 12:05:11.255134  139391 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0819 12:05:11.255195  139391 command_runner.go:130] > # runtime_type = "oci"
	I0819 12:05:11.255212  139391 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0819 12:05:11.255217  139391 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0819 12:05:11.255223  139391 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0819 12:05:11.255233  139391 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0819 12:05:11.255243  139391 command_runner.go:130] > # monitor_env = []
	I0819 12:05:11.255254  139391 command_runner.go:130] > # privileged_without_host_devices = false
	I0819 12:05:11.255264  139391 command_runner.go:130] > # allowed_annotations = []
	I0819 12:05:11.255277  139391 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0819 12:05:11.255285  139391 command_runner.go:130] > # Where:
	I0819 12:05:11.255296  139391 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0819 12:05:11.255307  139391 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0819 12:05:11.255320  139391 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0819 12:05:11.255333  139391 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0819 12:05:11.255343  139391 command_runner.go:130] > #   in $PATH.
	I0819 12:05:11.255356  139391 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0819 12:05:11.255367  139391 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0819 12:05:11.255380  139391 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0819 12:05:11.255388  139391 command_runner.go:130] > #   state.
	I0819 12:05:11.255399  139391 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0819 12:05:11.255409  139391 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0819 12:05:11.255422  139391 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0819 12:05:11.255434  139391 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0819 12:05:11.255448  139391 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0819 12:05:11.255461  139391 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0819 12:05:11.255479  139391 command_runner.go:130] > #   The currently recognized values are:
	I0819 12:05:11.255492  139391 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0819 12:05:11.255502  139391 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0819 12:05:11.255512  139391 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0819 12:05:11.255524  139391 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0819 12:05:11.255539  139391 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0819 12:05:11.255552  139391 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0819 12:05:11.255566  139391 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0819 12:05:11.255579  139391 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0819 12:05:11.255592  139391 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0819 12:05:11.255603  139391 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0819 12:05:11.255611  139391 command_runner.go:130] > #   deprecated option "conmon".
	I0819 12:05:11.255621  139391 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0819 12:05:11.255633  139391 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0819 12:05:11.255643  139391 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0819 12:05:11.255655  139391 command_runner.go:130] > #   should be moved to the container's cgroup
	I0819 12:05:11.255668  139391 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0819 12:05:11.255679  139391 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0819 12:05:11.255689  139391 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0819 12:05:11.255700  139391 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0819 12:05:11.255706  139391 command_runner.go:130] > #
	I0819 12:05:11.255712  139391 command_runner.go:130] > # Using the seccomp notifier feature:
	I0819 12:05:11.255719  139391 command_runner.go:130] > #
	I0819 12:05:11.255744  139391 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0819 12:05:11.255758  139391 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0819 12:05:11.255766  139391 command_runner.go:130] > #
	I0819 12:05:11.255778  139391 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0819 12:05:11.255795  139391 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0819 12:05:11.255802  139391 command_runner.go:130] > #
	I0819 12:05:11.255809  139391 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0819 12:05:11.255817  139391 command_runner.go:130] > # feature.
	I0819 12:05:11.255826  139391 command_runner.go:130] > #
	I0819 12:05:11.255835  139391 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0819 12:05:11.255849  139391 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0819 12:05:11.255862  139391 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0819 12:05:11.255877  139391 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0819 12:05:11.255889  139391 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0819 12:05:11.255896  139391 command_runner.go:130] > #
	I0819 12:05:11.255902  139391 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0819 12:05:11.255914  139391 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0819 12:05:11.255923  139391 command_runner.go:130] > #
	I0819 12:05:11.255934  139391 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0819 12:05:11.255946  139391 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0819 12:05:11.255954  139391 command_runner.go:130] > #
	I0819 12:05:11.255967  139391 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0819 12:05:11.255979  139391 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0819 12:05:11.255988  139391 command_runner.go:130] > # limitation.
	I0819 12:05:11.255996  139391 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0819 12:05:11.256005  139391 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0819 12:05:11.256014  139391 command_runner.go:130] > runtime_type = "oci"
	I0819 12:05:11.256024  139391 command_runner.go:130] > runtime_root = "/run/runc"
	I0819 12:05:11.256035  139391 command_runner.go:130] > runtime_config_path = ""
	I0819 12:05:11.256048  139391 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0819 12:05:11.256058  139391 command_runner.go:130] > monitor_cgroup = "pod"
	I0819 12:05:11.256068  139391 command_runner.go:130] > monitor_exec_cgroup = ""
	I0819 12:05:11.256077  139391 command_runner.go:130] > monitor_env = [
	I0819 12:05:11.256085  139391 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 12:05:11.256093  139391 command_runner.go:130] > ]
	I0819 12:05:11.256101  139391 command_runner.go:130] > privileged_without_host_devices = false
	I0819 12:05:11.256114  139391 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0819 12:05:11.256126  139391 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0819 12:05:11.256139  139391 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0819 12:05:11.256154  139391 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0819 12:05:11.256169  139391 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0819 12:05:11.256179  139391 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0819 12:05:11.256193  139391 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0819 12:05:11.256210  139391 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0819 12:05:11.256221  139391 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0819 12:05:11.256232  139391 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0819 12:05:11.256238  139391 command_runner.go:130] > # Example:
	I0819 12:05:11.256246  139391 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0819 12:05:11.256254  139391 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0819 12:05:11.256260  139391 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0819 12:05:11.256268  139391 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0819 12:05:11.256272  139391 command_runner.go:130] > # cpuset = 0
	I0819 12:05:11.256278  139391 command_runner.go:130] > # cpushares = "0-1"
	I0819 12:05:11.256283  139391 command_runner.go:130] > # Where:
	I0819 12:05:11.256291  139391 command_runner.go:130] > # The workload name is workload-type.
	I0819 12:05:11.256302  139391 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0819 12:05:11.256311  139391 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0819 12:05:11.256320  139391 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0819 12:05:11.256331  139391 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0819 12:05:11.256340  139391 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0819 12:05:11.256347  139391 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0819 12:05:11.256354  139391 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0819 12:05:11.256358  139391 command_runner.go:130] > # Default value is set to true
	I0819 12:05:11.256365  139391 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0819 12:05:11.256374  139391 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0819 12:05:11.256382  139391 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0819 12:05:11.256389  139391 command_runner.go:130] > # Default value is set to 'false'
	I0819 12:05:11.256400  139391 command_runner.go:130] > # disable_hostport_mapping = false
	I0819 12:05:11.256413  139391 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0819 12:05:11.256421  139391 command_runner.go:130] > #
	I0819 12:05:11.256434  139391 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0819 12:05:11.256442  139391 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0819 12:05:11.256455  139391 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0819 12:05:11.256469  139391 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0819 12:05:11.256481  139391 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0819 12:05:11.256490  139391 command_runner.go:130] > [crio.image]
	I0819 12:05:11.256500  139391 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0819 12:05:11.256510  139391 command_runner.go:130] > # default_transport = "docker://"
	I0819 12:05:11.256522  139391 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0819 12:05:11.256533  139391 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0819 12:05:11.256541  139391 command_runner.go:130] > # global_auth_file = ""
	I0819 12:05:11.256549  139391 command_runner.go:130] > # The image used to instantiate infra containers.
	I0819 12:05:11.256561  139391 command_runner.go:130] > # This option supports live configuration reload.
	I0819 12:05:11.256573  139391 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0819 12:05:11.256587  139391 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0819 12:05:11.256599  139391 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0819 12:05:11.256610  139391 command_runner.go:130] > # This option supports live configuration reload.
	I0819 12:05:11.256623  139391 command_runner.go:130] > # pause_image_auth_file = ""
	I0819 12:05:11.256634  139391 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0819 12:05:11.256648  139391 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0819 12:05:11.256660  139391 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0819 12:05:11.256671  139391 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0819 12:05:11.256680  139391 command_runner.go:130] > # pause_command = "/pause"
	I0819 12:05:11.256690  139391 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0819 12:05:11.256701  139391 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0819 12:05:11.256710  139391 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0819 12:05:11.256721  139391 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0819 12:05:11.256734  139391 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0819 12:05:11.256747  139391 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0819 12:05:11.256756  139391 command_runner.go:130] > # pinned_images = [
	I0819 12:05:11.256764  139391 command_runner.go:130] > # ]
	I0819 12:05:11.256777  139391 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0819 12:05:11.256792  139391 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0819 12:05:11.256803  139391 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0819 12:05:11.256818  139391 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0819 12:05:11.256830  139391 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0819 12:05:11.256837  139391 command_runner.go:130] > # signature_policy = ""
	I0819 12:05:11.256849  139391 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0819 12:05:11.256862  139391 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0819 12:05:11.256875  139391 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0819 12:05:11.256888  139391 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0819 12:05:11.256897  139391 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0819 12:05:11.256905  139391 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0819 12:05:11.256919  139391 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0819 12:05:11.256932  139391 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0819 12:05:11.256942  139391 command_runner.go:130] > # changing them here.
	I0819 12:05:11.256951  139391 command_runner.go:130] > # insecure_registries = [
	I0819 12:05:11.256960  139391 command_runner.go:130] > # ]
	I0819 12:05:11.256972  139391 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0819 12:05:11.256983  139391 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I0819 12:05:11.256993  139391 command_runner.go:130] > # image_volumes = "mkdir"
	I0819 12:05:11.257002  139391 command_runner.go:130] > # Temporary directory to use for storing big files
	I0819 12:05:11.257010  139391 command_runner.go:130] > # big_files_temporary_dir = ""
	I0819 12:05:11.257029  139391 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0819 12:05:11.257039  139391 command_runner.go:130] > # CNI plugins.
	I0819 12:05:11.257045  139391 command_runner.go:130] > [crio.network]
	I0819 12:05:11.257059  139391 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0819 12:05:11.257070  139391 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0819 12:05:11.257080  139391 command_runner.go:130] > # cni_default_network = ""
	I0819 12:05:11.257092  139391 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0819 12:05:11.257101  139391 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0819 12:05:11.257110  139391 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0819 12:05:11.257117  139391 command_runner.go:130] > # plugin_dirs = [
	I0819 12:05:11.257124  139391 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0819 12:05:11.257132  139391 command_runner.go:130] > # ]
	I0819 12:05:11.257142  139391 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0819 12:05:11.257152  139391 command_runner.go:130] > [crio.metrics]
	I0819 12:05:11.257162  139391 command_runner.go:130] > # Globally enable or disable metrics support.
	I0819 12:05:11.257172  139391 command_runner.go:130] > enable_metrics = true
	I0819 12:05:11.257182  139391 command_runner.go:130] > # Specify enabled metrics collectors.
	I0819 12:05:11.257194  139391 command_runner.go:130] > # By default, all metrics are enabled.
	I0819 12:05:11.257204  139391 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0819 12:05:11.257214  139391 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0819 12:05:11.257226  139391 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0819 12:05:11.257236  139391 command_runner.go:130] > # metrics_collectors = [
	I0819 12:05:11.257243  139391 command_runner.go:130] > # 	"operations",
	I0819 12:05:11.257253  139391 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0819 12:05:11.257264  139391 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0819 12:05:11.257273  139391 command_runner.go:130] > # 	"operations_errors",
	I0819 12:05:11.257283  139391 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0819 12:05:11.257293  139391 command_runner.go:130] > # 	"image_pulls_by_name",
	I0819 12:05:11.257302  139391 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0819 12:05:11.257309  139391 command_runner.go:130] > # 	"image_pulls_failures",
	I0819 12:05:11.257315  139391 command_runner.go:130] > # 	"image_pulls_successes",
	I0819 12:05:11.257324  139391 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0819 12:05:11.257334  139391 command_runner.go:130] > # 	"image_layer_reuse",
	I0819 12:05:11.257342  139391 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0819 12:05:11.257352  139391 command_runner.go:130] > # 	"containers_oom_total",
	I0819 12:05:11.257362  139391 command_runner.go:130] > # 	"containers_oom",
	I0819 12:05:11.257371  139391 command_runner.go:130] > # 	"processes_defunct",
	I0819 12:05:11.257380  139391 command_runner.go:130] > # 	"operations_total",
	I0819 12:05:11.257390  139391 command_runner.go:130] > # 	"operations_latency_seconds",
	I0819 12:05:11.257400  139391 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0819 12:05:11.257407  139391 command_runner.go:130] > # 	"operations_errors_total",
	I0819 12:05:11.257412  139391 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0819 12:05:11.257422  139391 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0819 12:05:11.257433  139391 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0819 12:05:11.257440  139391 command_runner.go:130] > # 	"image_pulls_success_total",
	I0819 12:05:11.257455  139391 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0819 12:05:11.257465  139391 command_runner.go:130] > # 	"containers_oom_count_total",
	I0819 12:05:11.257476  139391 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0819 12:05:11.257486  139391 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0819 12:05:11.257492  139391 command_runner.go:130] > # ]
	I0819 12:05:11.257501  139391 command_runner.go:130] > # The port on which the metrics server will listen.
	I0819 12:05:11.257507  139391 command_runner.go:130] > # metrics_port = 9090
	I0819 12:05:11.257515  139391 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0819 12:05:11.257527  139391 command_runner.go:130] > # metrics_socket = ""
	I0819 12:05:11.257539  139391 command_runner.go:130] > # The certificate for the secure metrics server.
	I0819 12:05:11.257552  139391 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0819 12:05:11.257566  139391 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0819 12:05:11.257576  139391 command_runner.go:130] > # certificate on any modification event.
	I0819 12:05:11.257585  139391 command_runner.go:130] > # metrics_cert = ""
	I0819 12:05:11.257591  139391 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0819 12:05:11.257598  139391 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0819 12:05:11.257607  139391 command_runner.go:130] > # metrics_key = ""
	I0819 12:05:11.257617  139391 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0819 12:05:11.257627  139391 command_runner.go:130] > [crio.tracing]
	I0819 12:05:11.257636  139391 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0819 12:05:11.257645  139391 command_runner.go:130] > # enable_tracing = false
	I0819 12:05:11.257654  139391 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0819 12:05:11.257664  139391 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0819 12:05:11.257676  139391 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0819 12:05:11.257684  139391 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0819 12:05:11.257691  139391 command_runner.go:130] > # CRI-O NRI configuration.
	I0819 12:05:11.257699  139391 command_runner.go:130] > [crio.nri]
	I0819 12:05:11.257707  139391 command_runner.go:130] > # Globally enable or disable NRI.
	I0819 12:05:11.257717  139391 command_runner.go:130] > # enable_nri = false
	I0819 12:05:11.257727  139391 command_runner.go:130] > # NRI socket to listen on.
	I0819 12:05:11.257739  139391 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0819 12:05:11.257748  139391 command_runner.go:130] > # NRI plugin directory to use.
	I0819 12:05:11.257758  139391 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0819 12:05:11.257768  139391 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0819 12:05:11.257777  139391 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0819 12:05:11.257793  139391 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0819 12:05:11.257804  139391 command_runner.go:130] > # nri_disable_connections = false
	I0819 12:05:11.257812  139391 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0819 12:05:11.257823  139391 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0819 12:05:11.257834  139391 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0819 12:05:11.257845  139391 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0819 12:05:11.257857  139391 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0819 12:05:11.257866  139391 command_runner.go:130] > [crio.stats]
	I0819 12:05:11.257879  139391 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0819 12:05:11.257891  139391 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0819 12:05:11.257902  139391 command_runner.go:130] > # stats_collection_period = 0
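The commented-out values in the config echoed above are CRI-O's compiled-in defaults; only the few uncommented keys, such as pause_image and enable_metrics, are set explicitly for this cluster. Settings like these are typically overridden with a small drop-in under /etc/crio/crio.conf.d/, which CRI-O applies over crio.conf, rather than by editing crio.conf itself. A minimal sketch of such a drop-in, with illustrative values that are not taken from this test run:

	# /etc/crio/crio.conf.d/99-overrides.conf  (hypothetical drop-in; registry name and port are assumptions)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"
	insecure_registries = ["myregistry.local:5000"]

	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090

Restarting the service (sudo systemctl restart crio) picks up the override; the handful of options marked above as supporting live configuration reload can also be re-read without a full restart.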
	I0819 12:05:11.258051  139391 cni.go:84] Creating CNI manager for ""
	I0819 12:05:11.258063  139391 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0819 12:05:11.258073  139391 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 12:05:11.258104  139391 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.88 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-320821 NodeName:multinode-320821 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 12:05:11.258260  139391 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.88
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-320821"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.88
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.88"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
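A config in this shape is consumed by kubeadm through its --config flag rather than through individual command-line flags; minikube drives kubeadm itself, pushing the rendered file to /var/tmp/minikube/kubeadm.yaml.new in the scp step that follows. A rough sketch of the equivalent manual invocation on a fresh control-plane node, for illustration only:

	# bootstrap the control plane from the generated config (illustrative; minikube performs this step itself)
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new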
	
	I0819 12:05:11.258338  139391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 12:05:11.268868  139391 command_runner.go:130] > kubeadm
	I0819 12:05:11.268895  139391 command_runner.go:130] > kubectl
	I0819 12:05:11.268901  139391 command_runner.go:130] > kubelet
	I0819 12:05:11.268927  139391 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 12:05:11.268985  139391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 12:05:11.278864  139391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0819 12:05:11.296126  139391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 12:05:11.312836  139391 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0819 12:05:11.330227  139391 ssh_runner.go:195] Run: grep 192.168.39.88	control-plane.minikube.internal$ /etc/hosts
	I0819 12:05:11.334469  139391 command_runner.go:130] > 192.168.39.88	control-plane.minikube.internal
	I0819 12:05:11.334582  139391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:05:11.474281  139391 ssh_runner.go:195] Run: sudo systemctl start kubelet
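If the kubelet fails to come up at this point, the rest of the bring-up stalls, so a quick manual check on the node can save time when debugging; these commands are assumptions for illustration and are not part of the automated flow:

	# confirm the kubelet unit is running and inspect its recent output
	sudo systemctl is-active kubelet
	sudo journalctl -u kubelet --no-pager -n 50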
	I0819 12:05:11.489147  139391 certs.go:68] Setting up /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821 for IP: 192.168.39.88
	I0819 12:05:11.489172  139391 certs.go:194] generating shared ca certs ...
	I0819 12:05:11.489197  139391 certs.go:226] acquiring lock for ca certs: {Name:mkea0a5571fe2f86ea35780a8a0585cdcb5f186c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:05:11.489375  139391 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key
	I0819 12:05:11.489428  139391 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key
	I0819 12:05:11.489442  139391 certs.go:256] generating profile certs ...
	I0819 12:05:11.489630  139391 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821/client.key
	I0819 12:05:11.489716  139391 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821/apiserver.key.1a1a7689
	I0819 12:05:11.489759  139391 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821/proxy-client.key
	I0819 12:05:11.489774  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 12:05:11.489793  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 12:05:11.489810  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 12:05:11.489828  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 12:05:11.489844  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 12:05:11.489862  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 12:05:11.489884  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 12:05:11.489901  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 12:05:11.489974  139391 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem (1338 bytes)
	W0819 12:05:11.490012  139391 certs.go:480] ignoring /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632_empty.pem, impossibly tiny 0 bytes
	I0819 12:05:11.490022  139391 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 12:05:11.490055  139391 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem (1082 bytes)
	I0819 12:05:11.490087  139391 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem (1123 bytes)
	I0819 12:05:11.490115  139391 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem (1679 bytes)
	I0819 12:05:11.490166  139391 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 12:05:11.490199  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> /usr/share/ca-certificates/1066322.pem
	I0819 12:05:11.490219  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:05:11.490237  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem -> /usr/share/ca-certificates/106632.pem
	I0819 12:05:11.491053  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 12:05:11.515953  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 12:05:11.540314  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 12:05:11.564381  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 12:05:11.589575  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 12:05:11.614661  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 12:05:11.639121  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 12:05:11.663016  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 12:05:11.687920  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /usr/share/ca-certificates/1066322.pem (1708 bytes)
	I0819 12:05:11.712519  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 12:05:11.736686  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem --> /usr/share/ca-certificates/106632.pem (1338 bytes)
	I0819 12:05:11.760798  139391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
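With the certificates pushed to the node, one way to confirm the apiserver certificate actually carries the SANs requested in the kubeadm config above (127.0.0.1, localhost, 192.168.39.88) is to dump its extensions. A small sketch, assuming the destination path used in the copies above:

	# list the Subject Alternative Names embedded in the apiserver certificate
	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'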
	I0819 12:05:11.777874  139391 ssh_runner.go:195] Run: openssl version
	I0819 12:05:11.783483  139391 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0819 12:05:11.783569  139391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1066322.pem && ln -fs /usr/share/ca-certificates/1066322.pem /etc/ssl/certs/1066322.pem"
	I0819 12:05:11.794459  139391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1066322.pem
	I0819 12:05:11.798941  139391 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 19 11:26 /usr/share/ca-certificates/1066322.pem
	I0819 12:05:11.798976  139391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 11:26 /usr/share/ca-certificates/1066322.pem
	I0819 12:05:11.799020  139391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1066322.pem
	I0819 12:05:11.804654  139391 command_runner.go:130] > 3ec20f2e
	I0819 12:05:11.804763  139391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1066322.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 12:05:11.814213  139391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 12:05:11.825036  139391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:05:11.829930  139391 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 19 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:05:11.829976  139391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:05:11.830028  139391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:05:11.836189  139391 command_runner.go:130] > b5213941
	I0819 12:05:11.836280  139391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 12:05:11.845975  139391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106632.pem && ln -fs /usr/share/ca-certificates/106632.pem /etc/ssl/certs/106632.pem"
	I0819 12:05:11.857080  139391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106632.pem
	I0819 12:05:11.863635  139391 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 19 11:26 /usr/share/ca-certificates/106632.pem
	I0819 12:05:11.863682  139391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 11:26 /usr/share/ca-certificates/106632.pem
	I0819 12:05:11.863758  139391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106632.pem
	I0819 12:05:11.870616  139391 command_runner.go:130] > 51391683
	I0819 12:05:11.870710  139391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106632.pem /etc/ssl/certs/51391683.0"
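The three blocks above follow the standard OpenSSL trust-store layout: each CA file is installed under /usr/share/ca-certificates and then exposed in /etc/ssl/certs under its subject hash with a ".0" suffix, which is how OpenSSL's default verify paths locate it. The same pattern for a hypothetical extra CA file:

	# compute the subject hash OpenSSL uses for lookup (e.g. 3ec20f2e above)
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/extra-ca.pem)
	# link the CA into the hashed trust directory so verification can find it
	sudo ln -fs /usr/share/ca-certificates/extra-ca.pem "/etc/ssl/certs/${hash}.0"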
	I0819 12:05:11.906188  139391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:05:11.910981  139391 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:05:11.911015  139391 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0819 12:05:11.911024  139391 command_runner.go:130] > Device: 253,1	Inode: 1056278     Links: 1
	I0819 12:05:11.911034  139391 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 12:05:11.911050  139391 command_runner.go:130] > Access: 2024-08-19 11:58:21.223937169 +0000
	I0819 12:05:11.911058  139391 command_runner.go:130] > Modify: 2024-08-19 11:58:21.223937169 +0000
	I0819 12:05:11.911067  139391 command_runner.go:130] > Change: 2024-08-19 11:58:21.223937169 +0000
	I0819 12:05:11.911074  139391 command_runner.go:130] >  Birth: 2024-08-19 11:58:21.223937169 +0000
	I0819 12:05:11.911179  139391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 12:05:11.920014  139391 command_runner.go:130] > Certificate will not expire
	I0819 12:05:11.920163  139391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 12:05:11.927193  139391 command_runner.go:130] > Certificate will not expire
	I0819 12:05:11.927327  139391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 12:05:11.945550  139391 command_runner.go:130] > Certificate will not expire
	I0819 12:05:11.945662  139391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 12:05:11.977275  139391 command_runner.go:130] > Certificate will not expire
	I0819 12:05:11.977372  139391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 12:05:11.986351  139391 command_runner.go:130] > Certificate will not expire
	I0819 12:05:11.986480  139391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 12:05:11.997065  139391 command_runner.go:130] > Certificate will not expire
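Each of the -checkend 86400 runs above exits non-zero if the certificate in question would expire within the next 86400 seconds (24 hours), printing "Certificate will expire" instead of "Certificate will not expire"; the exit status is what callers typically branch on. A minimal sketch against one of the same files:

	# succeed only if the etcd server certificate is still valid 24h from now
	if sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	  echo "certificate is valid for at least another 24h"
	else
	  echo "certificate expires within 24h"
	fi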
	I0819 12:05:11.997140  139391 kubeadm.go:392] StartCluster: {Name:multinode-320821 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-320821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.19 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:05:11.997261  139391 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 12:05:11.997318  139391 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 12:05:12.089865  139391 command_runner.go:130] > 9a56e4a12c9aa8da01773547f9e365d4993ecac11cf16d4120c72ec977233f9d
	I0819 12:05:12.089908  139391 command_runner.go:130] > 8aecdda5f9f767c862cb4ab67adc13b179d56fcf8fb1c01189a32546fb2fdcff
	I0819 12:05:12.089918  139391 command_runner.go:130] > 821073acd978a713b44069c4ae2ff6c599abf7108c707291b6495efed4a4318f
	I0819 12:05:12.089929  139391 command_runner.go:130] > 16f3a27e9da94d6d5999070bcb91d70d5ea1fec1ecc6eaad5b7fff4fce98647f
	I0819 12:05:12.089935  139391 command_runner.go:130] > 457a1c6babc8a40635e0c66b4e681aae9f346f21e70e037eab570467dd84c619
	I0819 12:05:12.089940  139391 command_runner.go:130] > 3c5f6b536a88ec232e63931b399ca3a32d410e628565d56cab9cfba5d5335174
	I0819 12:05:12.089946  139391 command_runner.go:130] > e24d6ca23b038638d4aa30410ff1b35fc0bca2d3cdbdf44468e2de01b598f959
	I0819 12:05:12.089953  139391 command_runner.go:130] > 91cf874d8d0dc8f26b5bdb542f341a75ed323b886d15b9338c14855fdb890994
	I0819 12:05:12.089982  139391 cri.go:89] found id: "9a56e4a12c9aa8da01773547f9e365d4993ecac11cf16d4120c72ec977233f9d"
	I0819 12:05:12.089991  139391 cri.go:89] found id: "8aecdda5f9f767c862cb4ab67adc13b179d56fcf8fb1c01189a32546fb2fdcff"
	I0819 12:05:12.089994  139391 cri.go:89] found id: "821073acd978a713b44069c4ae2ff6c599abf7108c707291b6495efed4a4318f"
	I0819 12:05:12.089997  139391 cri.go:89] found id: "16f3a27e9da94d6d5999070bcb91d70d5ea1fec1ecc6eaad5b7fff4fce98647f"
	I0819 12:05:12.090000  139391 cri.go:89] found id: "457a1c6babc8a40635e0c66b4e681aae9f346f21e70e037eab570467dd84c619"
	I0819 12:05:12.090003  139391 cri.go:89] found id: "3c5f6b536a88ec232e63931b399ca3a32d410e628565d56cab9cfba5d5335174"
	I0819 12:05:12.090006  139391 cri.go:89] found id: "e24d6ca23b038638d4aa30410ff1b35fc0bca2d3cdbdf44468e2de01b598f959"
	I0819 12:05:12.090009  139391 cri.go:89] found id: "91cf874d8d0dc8f26b5bdb542f341a75ed323b886d15b9338c14855fdb890994"
	I0819 12:05:12.090011  139391 cri.go:89] found id: ""
	I0819 12:05:12.090056  139391 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 19 12:06:54 multinode-320821 crio[2750]: time="2024-08-19 12:06:54.694066050Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069214694041445,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e8e06161-9b87-487a-abba-ef9ac6c9d8ac name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:06:54 multinode-320821 crio[2750]: time="2024-08-19 12:06:54.694572858Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6a95d22-2275-486c-9dbc-01bfbc0a2251 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:06:54 multinode-320821 crio[2750]: time="2024-08-19 12:06:54.694632634Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6a95d22-2275-486c-9dbc-01bfbc0a2251 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:06:54 multinode-320821 crio[2750]: time="2024-08-19 12:06:54.694962962Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c65bc9be79f6bb28386aad25d3c518f7993dd4e7fc0ded651a579bd226a21e3,PodSandboxId:2dee2b5f29bd311529135296b68308146071df02ec2cbe69458ed21e02ae3258,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724069151440580617,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kjbkv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05cf6615-10e8-43a0-a730-d56145296a11,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b8e142128325bd373fefb2716f98caf1c3bbf0a0954bc8530631e3370474a7,PodSandboxId:5e7bfdf817d635e8d64547426a53b5ea6f62c74856db067a7f5b1f5844004a2c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724069117843916603,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2k549,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fe239f5-81f4-4acf-899d-de128f526516,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d46285b33561282b00d12e30d0ac2eb7f61e9c16bd377657dbe56721cf8d28,PodSandboxId:13db2fe11ae92e7539de4a4ef99499eb398e3f5492aae8fce83ca293264671bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724069117905314601,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qfdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fbfb07-192b-46ad-9bd4-8de7672df209,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09611480af58a72299f384042adf2489546cb1d7b2d26eb694022bc29650ad95,PodSandboxId:0ecf0f0a485198582e3e57adc87eace03490d0fd904fbd054f4966e4567abb76,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724069117797909095,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kjdfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaaa7294-63fb-407c-9dbe-a8282d6edcda,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d3a7ba2d600d4ae49927eb69615fb3656fdd4b7b8646b8ed60706e2846c05e,PodSandboxId:e4d12cf76f6cf80e586bdc567f8aaa5c3a03c20ed0675398a8b5c4ab5f7bcb2d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724069117724374251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ff9c00-d89b-4078-a7f0-855458417ee5,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e16be7aeb3a437e5481a7e52a1ece6d8752be78153f8fd6b771bdf436021709f,PodSandboxId:6c177bf2d7c23ae97634bb26dd8c262706613a305321e8939b6a026f3da9b70b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724069114616727025,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0971c099a1aff8cbb98d5ae870a6c648,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1459312e96b1bf6a7864167625e6450505699b7c7f0ca669e14fac2299e3ac16,PodSandboxId:46c1fa2472ada3193201d93b313fd9a798e176a3b649b066feb31979be14d261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724069114618355937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ac53c658efe4ba8ffd0e2f236df130,},Annotations:map[string]string{io.kube
rnetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa19842caefbbec7549b2783ff78a002c7ddbbfb24b013cd0851f96812bfc4be,PodSandboxId:6a8470ecef3c4bbe0afbaa09ad097be123c9580ffa611ae3a30e04156892df9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724069112154105546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8770efc46e477d513aa9f98adb24a89b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99df12b846dbbd7da46c0177c9008c82b67ee6388e81bf59dc517c74a90a08c8,PodSandboxId:d910ec1f8211fbe2e23253112e6974521933f0db65d3e3e8470a791a1b7f30f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724069112151951349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b82c5559899a31d7270882afc7793,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e500f6d0181c7df702cc713c901a9426693e3a6f8fbbc8bf54adaaa27fe2a095,PodSandboxId:46c1fa2472ada3193201d93b313fd9a798e176a3b649b066feb31979be14d261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724069112118662945,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ac53c658efe4ba8ffd0e2f236df130,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea00494de0eb0945f4379b0a739e42c6da67d90e4b045d91b7ba42e5847b3a3,PodSandboxId:6c177bf2d7c23ae97634bb26dd8c262706613a305321e8939b6a026f3da9b70b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724069112101558818,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0971c099a1aff8cbb98d5ae870a6c648,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a4487882fdf97510aa26c8a63bf0a9cff43cd1b753731dcd86ef3ee10d9fda,PodSandboxId:71a902acf5f3a3225a11e0745ac28efd74020edc4921a53c46aa2a8fa686f63a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724068784030468860,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kjbkv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05cf6615-10e8-43a0-a730-d56145296a11,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a56e4a12c9aa8da01773547f9e365d4993ecac11cf16d4120c72ec977233f9d,PodSandboxId:7c5eedd9bea6b1c9f8de0fb57cb2d72b913b1d5335fb9fd5f46f7f1acfd1f9bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724068731196453193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qfdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fbfb07-192b-46ad-9bd4-8de7672df209,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aecdda5f9f767c862cb4ab67adc13b179d56fcf8fb1c01189a32546fb2fdcff,PodSandboxId:8eed3e1a479d2a6233444c1757b87c2826c6efc916138f94b2baf8374e1a65a2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724068731176049339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 28ff9c00-d89b-4078-a7f0-855458417ee5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:821073acd978a713b44069c4ae2ff6c599abf7108c707291b6495efed4a4318f,PodSandboxId:7d94180b7a43f7458998f458273e52f4df8d577f672f8572d96f9517aa556e78,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724068719429242208,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2k549,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 1fe239f5-81f4-4acf-899d-de128f526516,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f3a27e9da94d6d5999070bcb91d70d5ea1fec1ecc6eaad5b7fff4fce98647f,PodSandboxId:6c2448e0084ce4bd5016ad12939e878310178d35c704f948d6bcf0fa874c23cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724068716927547819,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kjdfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: aaaa7294-63fb-407c-9dbe-a8282d6edcda,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c5f6b536a88ec232e63931b399ca3a32d410e628565d56cab9cfba5d5335174,PodSandboxId:d3dfabfedbc5351307fd792dc8f6be5f3a1e332fb14ac03b247fd5a3ead1a22e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724068705290622607,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b82c5559899a31d7270882afc7793,},
Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91cf874d8d0dc8f26b5bdb542f341a75ed323b886d15b9338c14855fdb890994,PodSandboxId:732035bad80065dc50e7eb2613259ffa26657cc179f49da4ada12b4d46f16848,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724068705236863470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8770efc46e477d513aa9f98adb24a89b,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6a95d22-2275-486c-9dbc-01bfbc0a2251 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:06:54 multinode-320821 crio[2750]: time="2024-08-19 12:06:54.738902903Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cb4ac584-ebb3-4d17-b914-3d29b57bb8ff name=/runtime.v1.RuntimeService/Version
	Aug 19 12:06:54 multinode-320821 crio[2750]: time="2024-08-19 12:06:54.739013778Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cb4ac584-ebb3-4d17-b914-3d29b57bb8ff name=/runtime.v1.RuntimeService/Version
	Aug 19 12:06:54 multinode-320821 crio[2750]: time="2024-08-19 12:06:54.740370811Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=086c9838-fb7b-451b-a3f8-5dc3c034a09a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:06:54 multinode-320821 crio[2750]: time="2024-08-19 12:06:54.741128260Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069214741101478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=086c9838-fb7b-451b-a3f8-5dc3c034a09a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:06:54 multinode-320821 crio[2750]: time="2024-08-19 12:06:54.742030557Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ff81ab6-99c2-4559-9ab3-befd2dd03822 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:06:54 multinode-320821 crio[2750]: time="2024-08-19 12:06:54.742116962Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ff81ab6-99c2-4559-9ab3-befd2dd03822 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:06:54 multinode-320821 crio[2750]: time="2024-08-19 12:06:54.742657870Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c65bc9be79f6bb28386aad25d3c518f7993dd4e7fc0ded651a579bd226a21e3,PodSandboxId:2dee2b5f29bd311529135296b68308146071df02ec2cbe69458ed21e02ae3258,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724069151440580617,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kjbkv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05cf6615-10e8-43a0-a730-d56145296a11,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b8e142128325bd373fefb2716f98caf1c3bbf0a0954bc8530631e3370474a7,PodSandboxId:5e7bfdf817d635e8d64547426a53b5ea6f62c74856db067a7f5b1f5844004a2c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724069117843916603,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2k549,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fe239f5-81f4-4acf-899d-de128f526516,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d46285b33561282b00d12e30d0ac2eb7f61e9c16bd377657dbe56721cf8d28,PodSandboxId:13db2fe11ae92e7539de4a4ef99499eb398e3f5492aae8fce83ca293264671bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724069117905314601,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qfdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fbfb07-192b-46ad-9bd4-8de7672df209,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09611480af58a72299f384042adf2489546cb1d7b2d26eb694022bc29650ad95,PodSandboxId:0ecf0f0a485198582e3e57adc87eace03490d0fd904fbd054f4966e4567abb76,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724069117797909095,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kjdfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaaa7294-63fb-407c-9dbe-a8282d6edcda,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d3a7ba2d600d4ae49927eb69615fb3656fdd4b7b8646b8ed60706e2846c05e,PodSandboxId:e4d12cf76f6cf80e586bdc567f8aaa5c3a03c20ed0675398a8b5c4ab5f7bcb2d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724069117724374251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ff9c00-d89b-4078-a7f0-855458417ee5,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e16be7aeb3a437e5481a7e52a1ece6d8752be78153f8fd6b771bdf436021709f,PodSandboxId:6c177bf2d7c23ae97634bb26dd8c262706613a305321e8939b6a026f3da9b70b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724069114616727025,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0971c099a1aff8cbb98d5ae870a6c648,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1459312e96b1bf6a7864167625e6450505699b7c7f0ca669e14fac2299e3ac16,PodSandboxId:46c1fa2472ada3193201d93b313fd9a798e176a3b649b066feb31979be14d261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724069114618355937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ac53c658efe4ba8ffd0e2f236df130,},Annotations:map[string]string{io.kube
rnetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa19842caefbbec7549b2783ff78a002c7ddbbfb24b013cd0851f96812bfc4be,PodSandboxId:6a8470ecef3c4bbe0afbaa09ad097be123c9580ffa611ae3a30e04156892df9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724069112154105546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8770efc46e477d513aa9f98adb24a89b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99df12b846dbbd7da46c0177c9008c82b67ee6388e81bf59dc517c74a90a08c8,PodSandboxId:d910ec1f8211fbe2e23253112e6974521933f0db65d3e3e8470a791a1b7f30f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724069112151951349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b82c5559899a31d7270882afc7793,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e500f6d0181c7df702cc713c901a9426693e3a6f8fbbc8bf54adaaa27fe2a095,PodSandboxId:46c1fa2472ada3193201d93b313fd9a798e176a3b649b066feb31979be14d261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724069112118662945,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ac53c658efe4ba8ffd0e2f236df130,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea00494de0eb0945f4379b0a739e42c6da67d90e4b045d91b7ba42e5847b3a3,PodSandboxId:6c177bf2d7c23ae97634bb26dd8c262706613a305321e8939b6a026f3da9b70b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724069112101558818,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0971c099a1aff8cbb98d5ae870a6c648,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a4487882fdf97510aa26c8a63bf0a9cff43cd1b753731dcd86ef3ee10d9fda,PodSandboxId:71a902acf5f3a3225a11e0745ac28efd74020edc4921a53c46aa2a8fa686f63a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724068784030468860,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kjbkv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05cf6615-10e8-43a0-a730-d56145296a11,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a56e4a12c9aa8da01773547f9e365d4993ecac11cf16d4120c72ec977233f9d,PodSandboxId:7c5eedd9bea6b1c9f8de0fb57cb2d72b913b1d5335fb9fd5f46f7f1acfd1f9bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724068731196453193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qfdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fbfb07-192b-46ad-9bd4-8de7672df209,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aecdda5f9f767c862cb4ab67adc13b179d56fcf8fb1c01189a32546fb2fdcff,PodSandboxId:8eed3e1a479d2a6233444c1757b87c2826c6efc916138f94b2baf8374e1a65a2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724068731176049339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 28ff9c00-d89b-4078-a7f0-855458417ee5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:821073acd978a713b44069c4ae2ff6c599abf7108c707291b6495efed4a4318f,PodSandboxId:7d94180b7a43f7458998f458273e52f4df8d577f672f8572d96f9517aa556e78,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724068719429242208,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2k549,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 1fe239f5-81f4-4acf-899d-de128f526516,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f3a27e9da94d6d5999070bcb91d70d5ea1fec1ecc6eaad5b7fff4fce98647f,PodSandboxId:6c2448e0084ce4bd5016ad12939e878310178d35c704f948d6bcf0fa874c23cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724068716927547819,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kjdfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: aaaa7294-63fb-407c-9dbe-a8282d6edcda,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c5f6b536a88ec232e63931b399ca3a32d410e628565d56cab9cfba5d5335174,PodSandboxId:d3dfabfedbc5351307fd792dc8f6be5f3a1e332fb14ac03b247fd5a3ead1a22e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724068705290622607,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b82c5559899a31d7270882afc7793,},
Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91cf874d8d0dc8f26b5bdb542f341a75ed323b886d15b9338c14855fdb890994,PodSandboxId:732035bad80065dc50e7eb2613259ffa26657cc179f49da4ada12b4d46f16848,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724068705236863470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8770efc46e477d513aa9f98adb24a89b,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ff81ab6-99c2-4559-9ab3-befd2dd03822 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:06:54 multinode-320821 crio[2750]: time="2024-08-19 12:06:54.782266013Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3788713d-3839-49c8-aab8-9dffcc0e70c7 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:06:54 multinode-320821 crio[2750]: time="2024-08-19 12:06:54.782347262Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3788713d-3839-49c8-aab8-9dffcc0e70c7 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:06:54 multinode-320821 crio[2750]: time="2024-08-19 12:06:54.783255302Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b3f9131b-9de9-4133-9e55-ddf2bc23fc41 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:06:54 multinode-320821 crio[2750]: time="2024-08-19 12:06:54.783738544Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069214783716308,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3f9131b-9de9-4133-9e55-ddf2bc23fc41 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:06:54 multinode-320821 crio[2750]: time="2024-08-19 12:06:54.784255556Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=49f2082d-2a4b-4820-9bfb-3552eeef1fc3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:06:54 multinode-320821 crio[2750]: time="2024-08-19 12:06:54.784329184Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=49f2082d-2a4b-4820-9bfb-3552eeef1fc3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:06:54 multinode-320821 crio[2750]: time="2024-08-19 12:06:54.784695378Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c65bc9be79f6bb28386aad25d3c518f7993dd4e7fc0ded651a579bd226a21e3,PodSandboxId:2dee2b5f29bd311529135296b68308146071df02ec2cbe69458ed21e02ae3258,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724069151440580617,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kjbkv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05cf6615-10e8-43a0-a730-d56145296a11,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b8e142128325bd373fefb2716f98caf1c3bbf0a0954bc8530631e3370474a7,PodSandboxId:5e7bfdf817d635e8d64547426a53b5ea6f62c74856db067a7f5b1f5844004a2c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724069117843916603,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2k549,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fe239f5-81f4-4acf-899d-de128f526516,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d46285b33561282b00d12e30d0ac2eb7f61e9c16bd377657dbe56721cf8d28,PodSandboxId:13db2fe11ae92e7539de4a4ef99499eb398e3f5492aae8fce83ca293264671bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724069117905314601,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qfdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fbfb07-192b-46ad-9bd4-8de7672df209,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09611480af58a72299f384042adf2489546cb1d7b2d26eb694022bc29650ad95,PodSandboxId:0ecf0f0a485198582e3e57adc87eace03490d0fd904fbd054f4966e4567abb76,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724069117797909095,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kjdfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaaa7294-63fb-407c-9dbe-a8282d6edcda,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d3a7ba2d600d4ae49927eb69615fb3656fdd4b7b8646b8ed60706e2846c05e,PodSandboxId:e4d12cf76f6cf80e586bdc567f8aaa5c3a03c20ed0675398a8b5c4ab5f7bcb2d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724069117724374251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ff9c00-d89b-4078-a7f0-855458417ee5,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e16be7aeb3a437e5481a7e52a1ece6d8752be78153f8fd6b771bdf436021709f,PodSandboxId:6c177bf2d7c23ae97634bb26dd8c262706613a305321e8939b6a026f3da9b70b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724069114616727025,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0971c099a1aff8cbb98d5ae870a6c648,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1459312e96b1bf6a7864167625e6450505699b7c7f0ca669e14fac2299e3ac16,PodSandboxId:46c1fa2472ada3193201d93b313fd9a798e176a3b649b066feb31979be14d261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724069114618355937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ac53c658efe4ba8ffd0e2f236df130,},Annotations:map[string]string{io.kube
rnetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa19842caefbbec7549b2783ff78a002c7ddbbfb24b013cd0851f96812bfc4be,PodSandboxId:6a8470ecef3c4bbe0afbaa09ad097be123c9580ffa611ae3a30e04156892df9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724069112154105546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8770efc46e477d513aa9f98adb24a89b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99df12b846dbbd7da46c0177c9008c82b67ee6388e81bf59dc517c74a90a08c8,PodSandboxId:d910ec1f8211fbe2e23253112e6974521933f0db65d3e3e8470a791a1b7f30f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724069112151951349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b82c5559899a31d7270882afc7793,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e500f6d0181c7df702cc713c901a9426693e3a6f8fbbc8bf54adaaa27fe2a095,PodSandboxId:46c1fa2472ada3193201d93b313fd9a798e176a3b649b066feb31979be14d261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724069112118662945,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ac53c658efe4ba8ffd0e2f236df130,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea00494de0eb0945f4379b0a739e42c6da67d90e4b045d91b7ba42e5847b3a3,PodSandboxId:6c177bf2d7c23ae97634bb26dd8c262706613a305321e8939b6a026f3da9b70b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724069112101558818,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0971c099a1aff8cbb98d5ae870a6c648,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a4487882fdf97510aa26c8a63bf0a9cff43cd1b753731dcd86ef3ee10d9fda,PodSandboxId:71a902acf5f3a3225a11e0745ac28efd74020edc4921a53c46aa2a8fa686f63a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724068784030468860,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kjbkv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05cf6615-10e8-43a0-a730-d56145296a11,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a56e4a12c9aa8da01773547f9e365d4993ecac11cf16d4120c72ec977233f9d,PodSandboxId:7c5eedd9bea6b1c9f8de0fb57cb2d72b913b1d5335fb9fd5f46f7f1acfd1f9bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724068731196453193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qfdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fbfb07-192b-46ad-9bd4-8de7672df209,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aecdda5f9f767c862cb4ab67adc13b179d56fcf8fb1c01189a32546fb2fdcff,PodSandboxId:8eed3e1a479d2a6233444c1757b87c2826c6efc916138f94b2baf8374e1a65a2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724068731176049339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 28ff9c00-d89b-4078-a7f0-855458417ee5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:821073acd978a713b44069c4ae2ff6c599abf7108c707291b6495efed4a4318f,PodSandboxId:7d94180b7a43f7458998f458273e52f4df8d577f672f8572d96f9517aa556e78,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724068719429242208,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2k549,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 1fe239f5-81f4-4acf-899d-de128f526516,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f3a27e9da94d6d5999070bcb91d70d5ea1fec1ecc6eaad5b7fff4fce98647f,PodSandboxId:6c2448e0084ce4bd5016ad12939e878310178d35c704f948d6bcf0fa874c23cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724068716927547819,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kjdfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: aaaa7294-63fb-407c-9dbe-a8282d6edcda,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c5f6b536a88ec232e63931b399ca3a32d410e628565d56cab9cfba5d5335174,PodSandboxId:d3dfabfedbc5351307fd792dc8f6be5f3a1e332fb14ac03b247fd5a3ead1a22e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724068705290622607,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b82c5559899a31d7270882afc7793,},
Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91cf874d8d0dc8f26b5bdb542f341a75ed323b886d15b9338c14855fdb890994,PodSandboxId:732035bad80065dc50e7eb2613259ffa26657cc179f49da4ada12b4d46f16848,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724068705236863470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8770efc46e477d513aa9f98adb24a89b,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=49f2082d-2a4b-4820-9bfb-3552eeef1fc3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:06:54 multinode-320821 crio[2750]: time="2024-08-19 12:06:54.824750332Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3cc2e1ba-6bc4-488c-b816-74c9264616a6 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:06:54 multinode-320821 crio[2750]: time="2024-08-19 12:06:54.824820722Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3cc2e1ba-6bc4-488c-b816-74c9264616a6 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:06:54 multinode-320821 crio[2750]: time="2024-08-19 12:06:54.825935902Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=90137f61-be7e-43f0-a293-717eb0fe7b61 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:06:54 multinode-320821 crio[2750]: time="2024-08-19 12:06:54.826372225Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069214826351734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=90137f61-be7e-43f0-a293-717eb0fe7b61 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:06:54 multinode-320821 crio[2750]: time="2024-08-19 12:06:54.826866238Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=776ee0f6-2d02-483b-884b-481e82a9c026 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:06:54 multinode-320821 crio[2750]: time="2024-08-19 12:06:54.826930054Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=776ee0f6-2d02-483b-884b-481e82a9c026 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:06:54 multinode-320821 crio[2750]: time="2024-08-19 12:06:54.827268680Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c65bc9be79f6bb28386aad25d3c518f7993dd4e7fc0ded651a579bd226a21e3,PodSandboxId:2dee2b5f29bd311529135296b68308146071df02ec2cbe69458ed21e02ae3258,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724069151440580617,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kjbkv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05cf6615-10e8-43a0-a730-d56145296a11,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b8e142128325bd373fefb2716f98caf1c3bbf0a0954bc8530631e3370474a7,PodSandboxId:5e7bfdf817d635e8d64547426a53b5ea6f62c74856db067a7f5b1f5844004a2c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724069117843916603,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2k549,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fe239f5-81f4-4acf-899d-de128f526516,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d46285b33561282b00d12e30d0ac2eb7f61e9c16bd377657dbe56721cf8d28,PodSandboxId:13db2fe11ae92e7539de4a4ef99499eb398e3f5492aae8fce83ca293264671bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724069117905314601,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qfdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fbfb07-192b-46ad-9bd4-8de7672df209,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09611480af58a72299f384042adf2489546cb1d7b2d26eb694022bc29650ad95,PodSandboxId:0ecf0f0a485198582e3e57adc87eace03490d0fd904fbd054f4966e4567abb76,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724069117797909095,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kjdfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaaa7294-63fb-407c-9dbe-a8282d6edcda,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d3a7ba2d600d4ae49927eb69615fb3656fdd4b7b8646b8ed60706e2846c05e,PodSandboxId:e4d12cf76f6cf80e586bdc567f8aaa5c3a03c20ed0675398a8b5c4ab5f7bcb2d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724069117724374251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ff9c00-d89b-4078-a7f0-855458417ee5,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e16be7aeb3a437e5481a7e52a1ece6d8752be78153f8fd6b771bdf436021709f,PodSandboxId:6c177bf2d7c23ae97634bb26dd8c262706613a305321e8939b6a026f3da9b70b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724069114616727025,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0971c099a1aff8cbb98d5ae870a6c648,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1459312e96b1bf6a7864167625e6450505699b7c7f0ca669e14fac2299e3ac16,PodSandboxId:46c1fa2472ada3193201d93b313fd9a798e176a3b649b066feb31979be14d261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724069114618355937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ac53c658efe4ba8ffd0e2f236df130,},Annotations:map[string]string{io.kube
rnetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa19842caefbbec7549b2783ff78a002c7ddbbfb24b013cd0851f96812bfc4be,PodSandboxId:6a8470ecef3c4bbe0afbaa09ad097be123c9580ffa611ae3a30e04156892df9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724069112154105546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8770efc46e477d513aa9f98adb24a89b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99df12b846dbbd7da46c0177c9008c82b67ee6388e81bf59dc517c74a90a08c8,PodSandboxId:d910ec1f8211fbe2e23253112e6974521933f0db65d3e3e8470a791a1b7f30f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724069112151951349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b82c5559899a31d7270882afc7793,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e500f6d0181c7df702cc713c901a9426693e3a6f8fbbc8bf54adaaa27fe2a095,PodSandboxId:46c1fa2472ada3193201d93b313fd9a798e176a3b649b066feb31979be14d261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724069112118662945,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ac53c658efe4ba8ffd0e2f236df130,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea00494de0eb0945f4379b0a739e42c6da67d90e4b045d91b7ba42e5847b3a3,PodSandboxId:6c177bf2d7c23ae97634bb26dd8c262706613a305321e8939b6a026f3da9b70b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724069112101558818,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0971c099a1aff8cbb98d5ae870a6c648,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a4487882fdf97510aa26c8a63bf0a9cff43cd1b753731dcd86ef3ee10d9fda,PodSandboxId:71a902acf5f3a3225a11e0745ac28efd74020edc4921a53c46aa2a8fa686f63a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724068784030468860,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kjbkv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05cf6615-10e8-43a0-a730-d56145296a11,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a56e4a12c9aa8da01773547f9e365d4993ecac11cf16d4120c72ec977233f9d,PodSandboxId:7c5eedd9bea6b1c9f8de0fb57cb2d72b913b1d5335fb9fd5f46f7f1acfd1f9bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724068731196453193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qfdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fbfb07-192b-46ad-9bd4-8de7672df209,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aecdda5f9f767c862cb4ab67adc13b179d56fcf8fb1c01189a32546fb2fdcff,PodSandboxId:8eed3e1a479d2a6233444c1757b87c2826c6efc916138f94b2baf8374e1a65a2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724068731176049339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 28ff9c00-d89b-4078-a7f0-855458417ee5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:821073acd978a713b44069c4ae2ff6c599abf7108c707291b6495efed4a4318f,PodSandboxId:7d94180b7a43f7458998f458273e52f4df8d577f672f8572d96f9517aa556e78,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724068719429242208,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2k549,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 1fe239f5-81f4-4acf-899d-de128f526516,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f3a27e9da94d6d5999070bcb91d70d5ea1fec1ecc6eaad5b7fff4fce98647f,PodSandboxId:6c2448e0084ce4bd5016ad12939e878310178d35c704f948d6bcf0fa874c23cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724068716927547819,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kjdfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: aaaa7294-63fb-407c-9dbe-a8282d6edcda,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c5f6b536a88ec232e63931b399ca3a32d410e628565d56cab9cfba5d5335174,PodSandboxId:d3dfabfedbc5351307fd792dc8f6be5f3a1e332fb14ac03b247fd5a3ead1a22e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724068705290622607,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b82c5559899a31d7270882afc7793,},
Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91cf874d8d0dc8f26b5bdb542f341a75ed323b886d15b9338c14855fdb890994,PodSandboxId:732035bad80065dc50e7eb2613259ffa26657cc179f49da4ada12b4d46f16848,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724068705236863470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8770efc46e477d513aa9f98adb24a89b,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=776ee0f6-2d02-483b-884b-481e82a9c026 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	9c65bc9be79f6       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   2dee2b5f29bd3       busybox-7dff88458-kjbkv
	07d46285b3356       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   13db2fe11ae92       coredns-6f6b679f8f-qfdh2
	d2b8e14212832       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   5e7bfdf817d63       kindnet-2k549
	09611480af58a       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      About a minute ago   Running             kube-proxy                1                   0ecf0f0a48519       kube-proxy-kjdfp
	e4d3a7ba2d600       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   e4d12cf76f6cf       storage-provisioner
	1459312e96b1b       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            2                   46c1fa2472ada       kube-apiserver-multinode-320821
	e16be7aeb3a43       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   2                   6c177bf2d7c23       kube-controller-manager-multinode-320821
	aa19842caefbb       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      About a minute ago   Running             kube-scheduler            1                   6a8470ecef3c4       kube-scheduler-multinode-320821
	99df12b846dbb       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   d910ec1f8211f       etcd-multinode-320821
	e500f6d0181c7       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Exited              kube-apiserver            1                   46c1fa2472ada       kube-apiserver-multinode-320821
	eea00494de0eb       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Exited              kube-controller-manager   1                   6c177bf2d7c23       kube-controller-manager-multinode-320821
	b1a4487882fdf       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   71a902acf5f3a       busybox-7dff88458-kjbkv
	9a56e4a12c9aa       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago        Exited              coredns                   0                   7c5eedd9bea6b       coredns-6f6b679f8f-qfdh2
	8aecdda5f9f76       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   8eed3e1a479d2       storage-provisioner
	821073acd978a       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    8 minutes ago        Exited              kindnet-cni               0                   7d94180b7a43f       kindnet-2k549
	16f3a27e9da94       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      8 minutes ago        Exited              kube-proxy                0                   6c2448e0084ce       kube-proxy-kjdfp
	3c5f6b536a88e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   d3dfabfedbc53       etcd-multinode-320821
	91cf874d8d0dc       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago        Exited              kube-scheduler            0                   732035bad8006       kube-scheduler-multinode-320821
	
	
	==> coredns [07d46285b33561282b00d12e30d0ac2eb7f61e9c16bd377657dbe56721cf8d28] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60634 - 35121 "HINFO IN 810219209637135985.1649414912979778859. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.020888118s
	
	
	==> coredns [9a56e4a12c9aa8da01773547f9e365d4993ecac11cf16d4120c72ec977233f9d] <==
	[INFO] 10.244.0.3:51826 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002004814s
	[INFO] 10.244.0.3:46473 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00013555s
	[INFO] 10.244.0.3:58803 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000058241s
	[INFO] 10.244.0.3:53867 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001380791s
	[INFO] 10.244.0.3:46837 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000055562s
	[INFO] 10.244.0.3:40851 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066989s
	[INFO] 10.244.0.3:50911 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051093s
	[INFO] 10.244.1.2:45446 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128004s
	[INFO] 10.244.1.2:40145 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072366s
	[INFO] 10.244.1.2:36494 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057146s
	[INFO] 10.244.1.2:59410 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00006181s
	[INFO] 10.244.0.3:59393 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221636s
	[INFO] 10.244.0.3:35642 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098423s
	[INFO] 10.244.0.3:46741 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007295s
	[INFO] 10.244.0.3:34925 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063971s
	[INFO] 10.244.1.2:49528 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186733s
	[INFO] 10.244.1.2:53532 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000099828s
	[INFO] 10.244.1.2:60063 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135981s
	[INFO] 10.244.1.2:41718 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000073492s
	[INFO] 10.244.0.3:52028 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110627s
	[INFO] 10.244.0.3:42457 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000093285s
	[INFO] 10.244.0.3:49187 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079584s
	[INFO] 10.244.0.3:40837 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000101477s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-320821
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-320821
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=multinode-320821
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T11_58_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 11:58:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-320821
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:06:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:05:16 +0000   Mon, 19 Aug 2024 11:58:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:05:16 +0000   Mon, 19 Aug 2024 11:58:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:05:16 +0000   Mon, 19 Aug 2024 11:58:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:05:16 +0000   Mon, 19 Aug 2024 11:58:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.88
	  Hostname:    multinode-320821
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8d0023682c8b48b8b70516eeb6bb51ff
	  System UUID:                8d002368-2c8b-48b8-b705-16eeb6bb51ff
	  Boot ID:                    697c49aa-d957-4a31-8dc7-082016d87e90
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kjbkv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 coredns-6f6b679f8f-qfdh2                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m20s
	  kube-system                 etcd-multinode-320821                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m26s
	  kube-system                 kindnet-2k549                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m20s
	  kube-system                 kube-apiserver-multinode-320821             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-controller-manager-multinode-320821    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-proxy-kjdfp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-scheduler-multinode-320821             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m17s                kube-proxy       
	  Normal  Starting                 96s                  kube-proxy       
	  Normal  NodeHasSufficientPID     8m25s                kubelet          Node multinode-320821 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m25s                kubelet          Node multinode-320821 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m25s                kubelet          Node multinode-320821 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m25s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m21s                node-controller  Node multinode-320821 event: Registered Node multinode-320821 in Controller
	  Normal  NodeReady                8m5s                 kubelet          Node multinode-320821 status is now: NodeReady
	  Normal  Starting                 101s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  101s (x8 over 101s)  kubelet          Node multinode-320821 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s (x8 over 101s)  kubelet          Node multinode-320821 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s (x7 over 101s)  kubelet          Node multinode-320821 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  101s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           96s                  node-controller  Node multinode-320821 event: Registered Node multinode-320821 in Controller
	
	
	Name:               multinode-320821-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-320821-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=multinode-320821
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T12_05_56_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:05:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-320821-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:06:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:06:26 +0000   Mon, 19 Aug 2024 12:05:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:06:26 +0000   Mon, 19 Aug 2024 12:05:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:06:26 +0000   Mon, 19 Aug 2024 12:05:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:06:26 +0000   Mon, 19 Aug 2024 12:06:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.69
	  Hostname:    multinode-320821-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 edb5d3af9cb941e2adfe6ed1ee25cd2e
	  System UUID:                edb5d3af-9cb9-41e2-adfe-6ed1ee25cd2e
	  Boot ID:                    3a87b408-22fc-4621-9ac5-4df7c8574c8a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5k84p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kindnet-nxv2m              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m36s
	  kube-system                 kube-proxy-sg6jr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 54s                    kube-proxy       
	  Normal  Starting                 7m29s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m36s (x2 over 7m36s)  kubelet          Node multinode-320821-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m36s (x2 over 7m36s)  kubelet          Node multinode-320821-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m36s (x2 over 7m36s)  kubelet          Node multinode-320821-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m16s                  kubelet          Node multinode-320821-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  60s (x2 over 60s)      kubelet          Node multinode-320821-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x2 over 60s)      kubelet          Node multinode-320821-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x2 over 60s)      kubelet          Node multinode-320821-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  60s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           56s                    node-controller  Node multinode-320821-m02 event: Registered Node multinode-320821-m02 in Controller
	  Normal  NodeReady                41s                    kubelet          Node multinode-320821-m02 status is now: NodeReady
	
	
	Name:               multinode-320821-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-320821-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=multinode-320821
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T12_06_34_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:06:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-320821-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:06:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:06:51 +0000   Mon, 19 Aug 2024 12:06:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:06:51 +0000   Mon, 19 Aug 2024 12:06:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:06:51 +0000   Mon, 19 Aug 2024 12:06:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:06:51 +0000   Mon, 19 Aug 2024 12:06:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    multinode-320821-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9008b4c1b3cd45b4bff2c58f2cf1cb1f
	  System UUID:                9008b4c1-b3cd-45b4-bff2-c58f2cf1cb1f
	  Boot ID:                    7b3346ef-919d-45b3-8f21-4010262e5da8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-dvqkr       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m41s
	  kube-system                 kube-proxy-bvdxj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m35s                  kube-proxy  
	  Normal  Starting                 17s                    kube-proxy  
	  Normal  Starting                 5m46s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m41s (x2 over 6m41s)  kubelet     Node multinode-320821-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m41s (x2 over 6m41s)  kubelet     Node multinode-320821-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m41s (x2 over 6m41s)  kubelet     Node multinode-320821-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m41s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m21s                  kubelet     Node multinode-320821-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m52s (x2 over 5m52s)  kubelet     Node multinode-320821-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m52s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m52s (x2 over 5m52s)  kubelet     Node multinode-320821-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m52s (x2 over 5m52s)  kubelet     Node multinode-320821-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m32s                  kubelet     Node multinode-320821-m03 status is now: NodeReady
	  Normal  Starting                 22s                    kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  22s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21s (x2 over 22s)      kubelet     Node multinode-320821-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x2 over 22s)      kubelet     Node multinode-320821-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x2 over 22s)      kubelet     Node multinode-320821-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4s                     kubelet     Node multinode-320821-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.050547] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.186868] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.119222] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.272560] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +3.987950] systemd-fstab-generator[755]: Ignoring "noauto" option for root device
	[  +4.415121] systemd-fstab-generator[892]: Ignoring "noauto" option for root device
	[  +0.058127] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.478399] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[  +0.091256] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.110170] systemd-fstab-generator[1328]: Ignoring "noauto" option for root device
	[  +0.141556] kauditd_printk_skb: 18 callbacks suppressed
	[ +15.369666] kauditd_printk_skb: 69 callbacks suppressed
	[Aug19 11:59] kauditd_printk_skb: 12 callbacks suppressed
	[Aug19 12:04] systemd-fstab-generator[2669]: Ignoring "noauto" option for root device
	[Aug19 12:05] systemd-fstab-generator[2681]: Ignoring "noauto" option for root device
	[  +0.168617] systemd-fstab-generator[2695]: Ignoring "noauto" option for root device
	[  +0.135374] systemd-fstab-generator[2707]: Ignoring "noauto" option for root device
	[  +0.302569] systemd-fstab-generator[2735]: Ignoring "noauto" option for root device
	[ +10.496841] systemd-fstab-generator[2834]: Ignoring "noauto" option for root device
	[  +0.085329] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.462336] systemd-fstab-generator[3261]: Ignoring "noauto" option for root device
	[  +3.734685] kauditd_printk_skb: 88 callbacks suppressed
	[ +13.527636] systemd-fstab-generator[3922]: Ignoring "noauto" option for root device
	[  +0.092846] kauditd_printk_skb: 34 callbacks suppressed
	[ +20.090215] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [3c5f6b536a88ec232e63931b399ca3a32d410e628565d56cab9cfba5d5335174] <==
	{"level":"info","ts":"2024-08-19T11:58:26.325296Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T11:58:26.328274Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T11:58:26.345302Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T11:58:26.351235Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.88:2379"}
	{"level":"info","ts":"2024-08-19T11:58:26.354938Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T11:58:26.356569Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T11:59:19.556144Z","caller":"traceutil/trace.go:171","msg":"trace[1761422469] linearizableReadLoop","detail":"{readStateIndex:461; appliedIndex:460; }","duration":"141.883131ms","start":"2024-08-19T11:59:19.414243Z","end":"2024-08-19T11:59:19.556126Z","steps":["trace[1761422469] 'read index received'  (duration: 122.825883ms)","trace[1761422469] 'applied index is now lower than readState.Index'  (duration: 19.05682ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T11:59:19.556249Z","caller":"traceutil/trace.go:171","msg":"trace[684930308] transaction","detail":"{read_only:false; response_revision:441; number_of_response:1; }","duration":"142.576536ms","start":"2024-08-19T11:59:19.413665Z","end":"2024-08-19T11:59:19.556242Z","steps":["trace[684930308] 'process raft request'  (duration: 123.44387ms)","trace[684930308] 'compare'  (duration: 18.950997ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T11:59:19.556612Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.253782ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-320821-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T11:59:19.556677Z","caller":"traceutil/trace.go:171","msg":"trace[443472046] range","detail":"{range_begin:/registry/minions/multinode-320821-m02; range_end:; response_count:0; response_revision:441; }","duration":"142.434123ms","start":"2024-08-19T11:59:19.414233Z","end":"2024-08-19T11:59:19.556667Z","steps":["trace[443472046] 'agreement among raft nodes before linearized reading'  (duration: 142.199035ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T11:59:19.556817Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.077221ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.88\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-08-19T11:59:19.556844Z","caller":"traceutil/trace.go:171","msg":"trace[2144411410] range","detail":"{range_begin:/registry/masterleases/192.168.39.88; range_end:; response_count:1; response_revision:441; }","duration":"119.106314ms","start":"2024-08-19T11:59:19.437732Z","end":"2024-08-19T11:59:19.556839Z","steps":["trace[2144411410] 'agreement among raft nodes before linearized reading'  (duration: 119.058549ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T12:00:14.737620Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.664633ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16262376666070078341 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-320821-m03.17ed1f73c629124f\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-320821-m03.17ed1f73c629124f\" value_size:642 lease:7039004629215302234 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-19T12:00:14.738175Z","caller":"traceutil/trace.go:171","msg":"trace[137775970] transaction","detail":"{read_only:false; response_revision:576; number_of_response:1; }","duration":"257.060325ms","start":"2024-08-19T12:00:14.481102Z","end":"2024-08-19T12:00:14.738162Z","steps":["trace[137775970] 'process raft request'  (duration: 75.389352ms)","trace[137775970] 'compare'  (duration: 180.582195ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T12:00:21.199850Z","caller":"traceutil/trace.go:171","msg":"trace[185200115] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"236.387522ms","start":"2024-08-19T12:00:20.963442Z","end":"2024-08-19T12:00:21.199829Z","steps":["trace[185200115] 'process raft request'  (duration: 194.409482ms)","trace[185200115] 'compare'  (duration: 41.885571ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T12:03:28.870132Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-19T12:03:28.870196Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-320821","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.88:2380"],"advertise-client-urls":["https://192.168.39.88:2379"]}
	{"level":"warn","ts":"2024-08-19T12:03:28.870269Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T12:03:28.871117Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T12:03:28.955947Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.88:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T12:03:28.955993Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.88:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-19T12:03:28.956057Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aa0bd43d5988e1af","current-leader-member-id":"aa0bd43d5988e1af"}
	{"level":"info","ts":"2024-08-19T12:03:28.958747Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.88:2380"}
	{"level":"info","ts":"2024-08-19T12:03:28.958875Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.88:2380"}
	{"level":"info","ts":"2024-08-19T12:03:28.958899Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-320821","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.88:2380"],"advertise-client-urls":["https://192.168.39.88:2379"]}
	
	
	==> etcd [99df12b846dbbd7da46c0177c9008c82b67ee6388e81bf59dc517c74a90a08c8] <==
	{"level":"info","ts":"2024-08-19T12:05:12.561576Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9f9d2ecdb39156b6","local-member-id":"aa0bd43d5988e1af","added-peer-id":"aa0bd43d5988e1af","added-peer-peer-urls":["https://192.168.39.88:2380"]}
	{"level":"info","ts":"2024-08-19T12:05:12.561723Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9f9d2ecdb39156b6","local-member-id":"aa0bd43d5988e1af","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T12:05:12.561779Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T12:05:12.581245Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T12:05:12.581457Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aa0bd43d5988e1af","initial-advertise-peer-urls":["https://192.168.39.88:2380"],"listen-peer-urls":["https://192.168.39.88:2380"],"advertise-client-urls":["https://192.168.39.88:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.88:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T12:05:12.581495Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T12:05:12.581628Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.88:2380"}
	{"level":"info","ts":"2024-08-19T12:05:12.581651Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.88:2380"}
	{"level":"info","ts":"2024-08-19T12:05:14.246032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T12:05:14.246092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T12:05:14.246115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af received MsgPreVoteResp from aa0bd43d5988e1af at term 2"}
	{"level":"info","ts":"2024-08-19T12:05:14.246128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T12:05:14.246135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af received MsgVoteResp from aa0bd43d5988e1af at term 3"}
	{"level":"info","ts":"2024-08-19T12:05:14.246144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af became leader at term 3"}
	{"level":"info","ts":"2024-08-19T12:05:14.246150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aa0bd43d5988e1af elected leader aa0bd43d5988e1af at term 3"}
	{"level":"info","ts":"2024-08-19T12:05:14.250094Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aa0bd43d5988e1af","local-member-attributes":"{Name:multinode-320821 ClientURLs:[https://192.168.39.88:2379]}","request-path":"/0/members/aa0bd43d5988e1af/attributes","cluster-id":"9f9d2ecdb39156b6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T12:05:14.250252Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T12:05:14.250501Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T12:05:14.251181Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T12:05:14.251997Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T12:05:14.252560Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T12:05:14.253230Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.88:2379"}
	{"level":"info","ts":"2024-08-19T12:05:14.255097Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T12:05:14.255119Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T12:06:03.626026Z","caller":"traceutil/trace.go:171","msg":"trace[1061296087] transaction","detail":"{read_only:false; response_revision:1033; number_of_response:1; }","duration":"200.966712ms","start":"2024-08-19T12:06:03.425046Z","end":"2024-08-19T12:06:03.626012Z","steps":["trace[1061296087] 'process raft request'  (duration: 200.8456ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:06:55 up 8 min,  0 users,  load average: 0.35, 0.32, 0.18
	Linux multinode-320821 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [821073acd978a713b44069c4ae2ff6c599abf7108c707291b6495efed4a4318f] <==
	I0819 12:02:40.429306       1 main.go:322] Node multinode-320821-m03 has CIDR [10.244.4.0/24] 
	I0819 12:02:50.429310       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0819 12:02:50.429347       1 main.go:299] handling current node
	I0819 12:02:50.429366       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 12:02:50.429373       1 main.go:322] Node multinode-320821-m02 has CIDR [10.244.1.0/24] 
	I0819 12:02:50.429574       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0819 12:02:50.429604       1 main.go:322] Node multinode-320821-m03 has CIDR [10.244.4.0/24] 
	I0819 12:03:00.431392       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0819 12:03:00.431558       1 main.go:299] handling current node
	I0819 12:03:00.431597       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 12:03:00.431620       1 main.go:322] Node multinode-320821-m02 has CIDR [10.244.1.0/24] 
	I0819 12:03:00.431782       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0819 12:03:00.431808       1 main.go:322] Node multinode-320821-m03 has CIDR [10.244.4.0/24] 
	I0819 12:03:10.434115       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0819 12:03:10.434145       1 main.go:299] handling current node
	I0819 12:03:10.434159       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 12:03:10.434164       1 main.go:322] Node multinode-320821-m02 has CIDR [10.244.1.0/24] 
	I0819 12:03:10.434311       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0819 12:03:10.434340       1 main.go:322] Node multinode-320821-m03 has CIDR [10.244.4.0/24] 
	I0819 12:03:20.432646       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0819 12:03:20.432678       1 main.go:299] handling current node
	I0819 12:03:20.432693       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 12:03:20.432698       1 main.go:322] Node multinode-320821-m02 has CIDR [10.244.1.0/24] 
	I0819 12:03:20.432821       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0819 12:03:20.432826       1 main.go:322] Node multinode-320821-m03 has CIDR [10.244.4.0/24] 
	
	
	==> kindnet [d2b8e142128325bd373fefb2716f98caf1c3bbf0a0954bc8530631e3370474a7] <==
	I0819 12:06:08.640823       1 main.go:322] Node multinode-320821-m03 has CIDR [10.244.4.0/24] 
	I0819 12:06:18.639388       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 12:06:18.639417       1 main.go:322] Node multinode-320821-m02 has CIDR [10.244.1.0/24] 
	I0819 12:06:18.639583       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0819 12:06:18.639613       1 main.go:322] Node multinode-320821-m03 has CIDR [10.244.4.0/24] 
	I0819 12:06:18.639691       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0819 12:06:18.640207       1 main.go:299] handling current node
	I0819 12:06:28.640257       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0819 12:06:28.640316       1 main.go:299] handling current node
	I0819 12:06:28.640330       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 12:06:28.640339       1 main.go:322] Node multinode-320821-m02 has CIDR [10.244.1.0/24] 
	I0819 12:06:28.640448       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0819 12:06:28.640469       1 main.go:322] Node multinode-320821-m03 has CIDR [10.244.4.0/24] 
	I0819 12:06:38.641404       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0819 12:06:38.641560       1 main.go:322] Node multinode-320821-m03 has CIDR [10.244.2.0/24] 
	I0819 12:06:38.641725       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0819 12:06:38.641816       1 main.go:299] handling current node
	I0819 12:06:38.641850       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 12:06:38.641870       1 main.go:322] Node multinode-320821-m02 has CIDR [10.244.1.0/24] 
	I0819 12:06:48.641712       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0819 12:06:48.641871       1 main.go:299] handling current node
	I0819 12:06:48.641926       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 12:06:48.641964       1 main.go:322] Node multinode-320821-m02 has CIDR [10.244.1.0/24] 
	I0819 12:06:48.642118       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0819 12:06:48.642150       1 main.go:322] Node multinode-320821-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [1459312e96b1bf6a7864167625e6450505699b7c7f0ca669e14fac2299e3ac16] <==
	I0819 12:05:16.505740       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 12:05:16.505777       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 12:05:16.506146       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 12:05:16.506367       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 12:05:16.508911       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 12:05:16.510428       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 12:05:16.510778       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 12:05:16.511327       1 aggregator.go:171] initial CRD sync complete...
	I0819 12:05:16.512008       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 12:05:16.512085       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 12:05:16.512111       1 cache.go:39] Caches are synced for autoregister controller
	I0819 12:05:16.513374       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 12:05:16.521571       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 12:05:16.589431       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 12:05:16.592815       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 12:05:16.592849       1 policy_source.go:224] refreshing policies
	I0819 12:05:16.633911       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 12:05:17.407956       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 12:05:18.779074       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 12:05:18.928469       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 12:05:18.942981       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 12:05:19.022742       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 12:05:19.034801       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 12:05:19.826869       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 12:05:20.120796       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [e500f6d0181c7df702cc713c901a9426693e3a6f8fbbc8bf54adaaa27fe2a095] <==
	
	
	==> kube-controller-manager [e16be7aeb3a437e5481a7e52a1ece6d8752be78153f8fd6b771bdf436021709f] <==
	I0819 12:06:14.967402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m02"
	I0819 12:06:14.978069       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m02"
	I0819 12:06:14.983446       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.564µs"
	I0819 12:06:15.005816       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.705µs"
	I0819 12:06:17.192495       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="5.820142ms"
	I0819 12:06:17.192898       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="54.87µs"
	I0819 12:06:19.902413       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m02"
	I0819 12:06:26.633883       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m02"
	I0819 12:06:32.871166       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m03"
	I0819 12:06:32.888899       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m03"
	I0819 12:06:33.126714       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m03"
	I0819 12:06:33.126958       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-320821-m02"
	I0819 12:06:34.117708       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-320821-m03\" does not exist"
	I0819 12:06:34.117924       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-320821-m02"
	I0819 12:06:34.150327       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-320821-m03" podCIDRs=["10.244.2.0/24"]
	I0819 12:06:34.151055       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m03"
	I0819 12:06:34.151180       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m03"
	I0819 12:06:34.161074       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m03"
	I0819 12:06:34.488547       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m03"
	I0819 12:06:34.991945       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m03"
	I0819 12:06:44.422753       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m03"
	I0819 12:06:51.907128       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-320821-m03"
	I0819 12:06:51.907307       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m03"
	I0819 12:06:51.918890       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m03"
	I0819 12:06:54.915279       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m03"
	
	
	==> kube-controller-manager [eea00494de0eb0945f4379b0a739e42c6da67d90e4b045d91b7ba42e5847b3a3] <==
	
	
	==> kube-proxy [09611480af58a72299f384042adf2489546cb1d7b2d26eb694022bc29650ad95] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 12:05:18.110824       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 12:05:18.125091       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.88"]
	E0819 12:05:18.125248       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 12:05:18.165945       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 12:05:18.166005       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 12:05:18.166033       1 server_linux.go:169] "Using iptables Proxier"
	I0819 12:05:18.169350       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 12:05:18.169626       1 server.go:483] "Version info" version="v1.31.0"
	I0819 12:05:18.169648       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:05:18.171112       1 config.go:197] "Starting service config controller"
	I0819 12:05:18.171135       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 12:05:18.171151       1 config.go:104] "Starting endpoint slice config controller"
	I0819 12:05:18.171155       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 12:05:18.171503       1 config.go:326] "Starting node config controller"
	I0819 12:05:18.171563       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 12:05:18.271303       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 12:05:18.271361       1 shared_informer.go:320] Caches are synced for service config
	I0819 12:05:18.271675       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [16f3a27e9da94d6d5999070bcb91d70d5ea1fec1ecc6eaad5b7fff4fce98647f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 11:58:37.080938       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 11:58:37.092865       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.88"]
	E0819 11:58:37.092926       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 11:58:37.127211       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 11:58:37.127260       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 11:58:37.127319       1 server_linux.go:169] "Using iptables Proxier"
	I0819 11:58:37.130138       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 11:58:37.130405       1 server.go:483] "Version info" version="v1.31.0"
	I0819 11:58:37.130429       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 11:58:37.131799       1 config.go:197] "Starting service config controller"
	I0819 11:58:37.131847       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 11:58:37.131868       1 config.go:104] "Starting endpoint slice config controller"
	I0819 11:58:37.131872       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 11:58:37.132323       1 config.go:326] "Starting node config controller"
	I0819 11:58:37.132351       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 11:58:37.231956       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 11:58:37.232018       1 shared_informer.go:320] Caches are synced for service config
	I0819 11:58:37.232464       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [91cf874d8d0dc8f26b5bdb542f341a75ed323b886d15b9338c14855fdb890994] <==
	E0819 11:58:27.802376       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:58:27.802420       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 11:58:27.802445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:58:28.615697       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 11:58:28.615867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 11:58:28.626250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 11:58:28.626348       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:58:28.646865       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 11:58:28.647399       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:58:28.652970       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 11:58:28.653580       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:58:28.668729       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 11:58:28.669563       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:58:28.874798       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 11:58:28.874845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:58:28.984289       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 11:58:28.984336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:58:29.014738       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 11:58:29.014915       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:58:29.041902       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 11:58:29.042084       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:58:29.070952       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 11:58:29.071075       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0819 11:58:32.292055       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 12:03:28.865277       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [aa19842caefbbec7549b2783ff78a002c7ddbbfb24b013cd0851f96812bfc4be] <==
	W0819 12:05:14.159224       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.88:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.88:8443: connect: connection refused
	E0819 12:05:14.159279       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.88:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.88:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:05:14.213032       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.88:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.88:8443: connect: connection refused
	E0819 12:05:14.213089       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.88:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.88:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:05:14.280051       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.88:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.88:8443: connect: connection refused
	E0819 12:05:14.280106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.88:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.88:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:05:14.288911       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.88:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.88:8443: connect: connection refused
	E0819 12:05:14.288966       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.88:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.88:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:05:14.322754       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.88:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.88:8443: connect: connection refused
	E0819 12:05:14.322817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.88:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.88:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:05:14.375112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.88:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.88:8443: connect: connection refused
	E0819 12:05:14.375201       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.88:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.88:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:05:14.386877       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.88:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.88:8443: connect: connection refused
	E0819 12:05:14.386943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.88:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.88:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:05:14.471245       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.88:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.88:8443: connect: connection refused
	E0819 12:05:14.471383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.88:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.88:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:05:14.557812       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.88:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.88:8443: connect: connection refused
	E0819 12:05:14.557899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.88:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.88:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:05:14.642295       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.88:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.88:8443: connect: connection refused
	E0819 12:05:14.642377       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.88:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.88:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:05:16.466118       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 12:05:16.466283       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 12:05:16.511722       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 12:05:16.511769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 12:05:21.129196       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 12:05:24 multinode-320821 kubelet[3268]: E0819 12:05:24.231627    3268 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069124231241309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:05:24 multinode-320821 kubelet[3268]: E0819 12:05:24.231685    3268 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069124231241309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:05:34 multinode-320821 kubelet[3268]: E0819 12:05:34.233674    3268 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069134233054666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:05:34 multinode-320821 kubelet[3268]: E0819 12:05:34.233701    3268 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069134233054666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:05:44 multinode-320821 kubelet[3268]: E0819 12:05:44.235741    3268 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069144234928746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:05:44 multinode-320821 kubelet[3268]: E0819 12:05:44.236105    3268 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069144234928746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:05:54 multinode-320821 kubelet[3268]: E0819 12:05:54.237914    3268 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069154237421108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:05:54 multinode-320821 kubelet[3268]: E0819 12:05:54.237942    3268 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069154237421108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:06:04 multinode-320821 kubelet[3268]: E0819 12:06:04.240076    3268 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069164239392028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:06:04 multinode-320821 kubelet[3268]: E0819 12:06:04.240349    3268 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069164239392028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:06:14 multinode-320821 kubelet[3268]: E0819 12:06:14.227609    3268 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 12:06:14 multinode-320821 kubelet[3268]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 12:06:14 multinode-320821 kubelet[3268]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 12:06:14 multinode-320821 kubelet[3268]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 12:06:14 multinode-320821 kubelet[3268]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 12:06:14 multinode-320821 kubelet[3268]: E0819 12:06:14.242577    3268 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069174241914931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:06:14 multinode-320821 kubelet[3268]: E0819 12:06:14.242603    3268 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069174241914931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:06:24 multinode-320821 kubelet[3268]: E0819 12:06:24.244584    3268 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069184244166084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:06:24 multinode-320821 kubelet[3268]: E0819 12:06:24.244643    3268 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069184244166084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:06:34 multinode-320821 kubelet[3268]: E0819 12:06:34.247832    3268 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069194247191866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:06:34 multinode-320821 kubelet[3268]: E0819 12:06:34.247872    3268 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069194247191866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:06:44 multinode-320821 kubelet[3268]: E0819 12:06:44.250339    3268 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069204249126419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:06:44 multinode-320821 kubelet[3268]: E0819 12:06:44.250419    3268 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069204249126419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:06:54 multinode-320821 kubelet[3268]: E0819 12:06:54.252350    3268 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069214252036170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:06:54 multinode-320821 kubelet[3268]: E0819 12:06:54.252379    3268 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069214252036170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 12:06:54.428293  140538 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19476-99410/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-320821 -n multinode-320821
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-320821 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (330.48s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 stop
E0819 12:08:35.348528  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-320821 stop: exit status 82 (2m0.477727559s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-320821-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-320821 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-320821 status: exit status 3 (18.645379201s)

                                                
                                                
-- stdout --
	multinode-320821
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-320821-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 12:09:17.388072  141188 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.69:22: connect: no route to host
	E0819 12:09:17.388118  141188 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.69:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-320821 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-320821 -n multinode-320821
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-320821 logs -n 25: (1.417635765s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-320821 ssh -n                                                                 | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-320821 cp multinode-320821-m02:/home/docker/cp-test.txt                       | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821:/home/docker/cp-test_multinode-320821-m02_multinode-320821.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-320821 ssh -n                                                                 | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-320821 ssh -n multinode-320821 sudo cat                                       | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | /home/docker/cp-test_multinode-320821-m02_multinode-320821.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-320821 cp multinode-320821-m02:/home/docker/cp-test.txt                       | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821-m03:/home/docker/cp-test_multinode-320821-m02_multinode-320821-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-320821 ssh -n                                                                 | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-320821 ssh -n multinode-320821-m03 sudo cat                                   | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | /home/docker/cp-test_multinode-320821-m02_multinode-320821-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-320821 cp testdata/cp-test.txt                                                | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-320821 ssh -n                                                                 | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-320821 cp multinode-320821-m03:/home/docker/cp-test.txt                       | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile690601289/001/cp-test_multinode-320821-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-320821 ssh -n                                                                 | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-320821 cp multinode-320821-m03:/home/docker/cp-test.txt                       | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821:/home/docker/cp-test_multinode-320821-m03_multinode-320821.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-320821 ssh -n                                                                 | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-320821 ssh -n multinode-320821 sudo cat                                       | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | /home/docker/cp-test_multinode-320821-m03_multinode-320821.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-320821 cp multinode-320821-m03:/home/docker/cp-test.txt                       | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821-m02:/home/docker/cp-test_multinode-320821-m03_multinode-320821-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-320821 ssh -n                                                                 | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-320821 ssh -n multinode-320821-m02 sudo cat                                   | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | /home/docker/cp-test_multinode-320821-m03_multinode-320821-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-320821 node stop m03                                                          | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	| node    | multinode-320821 node start                                                             | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:01 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-320821                                                                | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC |                     |
	| stop    | -p multinode-320821                                                                     | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC |                     |
	| start   | -p multinode-320821                                                                     | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:03 UTC | 19 Aug 24 12:06 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-320821                                                                | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:06 UTC |                     |
	| node    | multinode-320821 node delete                                                            | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:06 UTC | 19 Aug 24 12:06 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-320821 stop                                                                   | multinode-320821 | jenkins | v1.33.1 | 19 Aug 24 12:06 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 12:03:27
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 12:03:27.535612  139391 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:03:27.535759  139391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:03:27.535769  139391 out.go:358] Setting ErrFile to fd 2...
	I0819 12:03:27.535773  139391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:03:27.535997  139391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 12:03:27.536555  139391 out.go:352] Setting JSON to false
	I0819 12:03:27.537521  139391 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6354,"bootTime":1724062654,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 12:03:27.537584  139391 start.go:139] virtualization: kvm guest
	I0819 12:03:27.539689  139391 out.go:177] * [multinode-320821] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 12:03:27.541284  139391 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 12:03:27.541291  139391 notify.go:220] Checking for updates...
	I0819 12:03:27.542668  139391 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:03:27.544095  139391 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 12:03:27.545709  139391 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 12:03:27.547359  139391 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 12:03:27.548555  139391 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 12:03:27.550154  139391 config.go:182] Loaded profile config "multinode-320821": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:03:27.550250  139391 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 12:03:27.550677  139391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:03:27.550723  139391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:03:27.566094  139391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34139
	I0819 12:03:27.566602  139391 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:03:27.567349  139391 main.go:141] libmachine: Using API Version  1
	I0819 12:03:27.567384  139391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:03:27.567759  139391 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:03:27.567943  139391 main.go:141] libmachine: (multinode-320821) Calling .DriverName
	I0819 12:03:27.606818  139391 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 12:03:27.608163  139391 start.go:297] selected driver: kvm2
	I0819 12:03:27.608191  139391 start.go:901] validating driver "kvm2" against &{Name:multinode-320821 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-320821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.19 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:03:27.608337  139391 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 12:03:27.608747  139391 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:03:27.608831  139391 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19476-99410/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 12:03:27.625037  139391 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 12:03:27.625809  139391 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 12:03:27.625854  139391 cni.go:84] Creating CNI manager for ""
	I0819 12:03:27.625862  139391 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0819 12:03:27.625910  139391 start.go:340] cluster config:
	{Name:multinode-320821 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-320821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.19 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:03:27.626018  139391 iso.go:125] acquiring lock: {Name:mkf3e460532d8bd3cc1352bbf698537d79a481a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:03:27.627849  139391 out.go:177] * Starting "multinode-320821" primary control-plane node in "multinode-320821" cluster
	I0819 12:03:27.629126  139391 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:03:27.629166  139391 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 12:03:27.629175  139391 cache.go:56] Caching tarball of preloaded images
	I0819 12:03:27.629254  139391 preload.go:172] Found /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 12:03:27.629266  139391 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 12:03:27.629374  139391 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821/config.json ...
	I0819 12:03:27.629583  139391 start.go:360] acquireMachinesLock for multinode-320821: {Name:mk79269fbbd6912302c8df8ddb039d4e2f1a0790 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 12:03:27.629625  139391 start.go:364] duration metric: took 23.08µs to acquireMachinesLock for "multinode-320821"
	I0819 12:03:27.629640  139391 start.go:96] Skipping create...Using existing machine configuration
	I0819 12:03:27.629648  139391 fix.go:54] fixHost starting: 
	I0819 12:03:27.629917  139391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:03:27.629950  139391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:03:27.644835  139391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43833
	I0819 12:03:27.645245  139391 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:03:27.645784  139391 main.go:141] libmachine: Using API Version  1
	I0819 12:03:27.645807  139391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:03:27.646118  139391 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:03:27.646330  139391 main.go:141] libmachine: (multinode-320821) Calling .DriverName
	I0819 12:03:27.646473  139391 main.go:141] libmachine: (multinode-320821) Calling .GetState
	I0819 12:03:27.648176  139391 fix.go:112] recreateIfNeeded on multinode-320821: state=Running err=<nil>
	W0819 12:03:27.648196  139391 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 12:03:27.649856  139391 out.go:177] * Updating the running kvm2 "multinode-320821" VM ...
	I0819 12:03:27.651100  139391 machine.go:93] provisionDockerMachine start ...
	I0819 12:03:27.651126  139391 main.go:141] libmachine: (multinode-320821) Calling .DriverName
	I0819 12:03:27.651377  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHHostname
	I0819 12:03:27.653913  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:27.654479  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:03:27.654505  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:27.654675  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHPort
	I0819 12:03:27.654892  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:03:27.655052  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:03:27.655197  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHUsername
	I0819 12:03:27.655389  139391 main.go:141] libmachine: Using SSH client type: native
	I0819 12:03:27.655577  139391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0819 12:03:27.655590  139391 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 12:03:27.763957  139391 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-320821
	
	I0819 12:03:27.763983  139391 main.go:141] libmachine: (multinode-320821) Calling .GetMachineName
	I0819 12:03:27.764258  139391 buildroot.go:166] provisioning hostname "multinode-320821"
	I0819 12:03:27.764284  139391 main.go:141] libmachine: (multinode-320821) Calling .GetMachineName
	I0819 12:03:27.764518  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHHostname
	I0819 12:03:27.767215  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:27.767573  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:03:27.767601  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:27.767715  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHPort
	I0819 12:03:27.767980  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:03:27.768155  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:03:27.768282  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHUsername
	I0819 12:03:27.768452  139391 main.go:141] libmachine: Using SSH client type: native
	I0819 12:03:27.768650  139391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0819 12:03:27.768666  139391 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-320821 && echo "multinode-320821" | sudo tee /etc/hostname
	I0819 12:03:27.889999  139391 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-320821
	
	I0819 12:03:27.890029  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHHostname
	I0819 12:03:27.892692  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:27.893003  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:03:27.893036  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:27.893194  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHPort
	I0819 12:03:27.893389  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:03:27.893562  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:03:27.893692  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHUsername
	I0819 12:03:27.893888  139391 main.go:141] libmachine: Using SSH client type: native
	I0819 12:03:27.894057  139391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0819 12:03:27.894073  139391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-320821' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-320821/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-320821' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 12:03:28.000583  139391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:03:28.000615  139391 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19476-99410/.minikube CaCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19476-99410/.minikube}
	I0819 12:03:28.000660  139391 buildroot.go:174] setting up certificates
	I0819 12:03:28.000670  139391 provision.go:84] configureAuth start
	I0819 12:03:28.000680  139391 main.go:141] libmachine: (multinode-320821) Calling .GetMachineName
	I0819 12:03:28.001062  139391 main.go:141] libmachine: (multinode-320821) Calling .GetIP
	I0819 12:03:28.003574  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:28.003986  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:03:28.004015  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:28.004161  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHHostname
	I0819 12:03:28.006399  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:28.006731  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:03:28.006777  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:28.006904  139391 provision.go:143] copyHostCerts
	I0819 12:03:28.006942  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 12:03:28.006981  139391 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem, removing ...
	I0819 12:03:28.007012  139391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 12:03:28.007096  139391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem (1082 bytes)
	I0819 12:03:28.007190  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 12:03:28.007213  139391 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem, removing ...
	I0819 12:03:28.007222  139391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 12:03:28.007258  139391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem (1123 bytes)
	I0819 12:03:28.007321  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 12:03:28.007344  139391 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem, removing ...
	I0819 12:03:28.007352  139391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 12:03:28.007386  139391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem (1679 bytes)
	I0819 12:03:28.007451  139391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem org=jenkins.multinode-320821 san=[127.0.0.1 192.168.39.88 localhost minikube multinode-320821]
	I0819 12:03:28.577557  139391 provision.go:177] copyRemoteCerts
	I0819 12:03:28.577618  139391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 12:03:28.577643  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHHostname
	I0819 12:03:28.580552  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:28.580908  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:03:28.580946  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:28.581095  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHPort
	I0819 12:03:28.581333  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:03:28.581509  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHUsername
	I0819 12:03:28.581668  139391 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/multinode-320821/id_rsa Username:docker}
	I0819 12:03:28.667128  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 12:03:28.667212  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 12:03:28.695545  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 12:03:28.695631  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0819 12:03:28.721907  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 12:03:28.721985  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 12:03:28.747207  139391 provision.go:87] duration metric: took 746.522403ms to configureAuth
	I0819 12:03:28.747237  139391 buildroot.go:189] setting minikube options for container-runtime
	I0819 12:03:28.747501  139391 config.go:182] Loaded profile config "multinode-320821": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:03:28.747603  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHHostname
	I0819 12:03:28.750302  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:28.750693  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:03:28.750729  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:03:28.750948  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHPort
	I0819 12:03:28.751165  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:03:28.751335  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:03:28.751499  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHUsername
	I0819 12:03:28.751682  139391 main.go:141] libmachine: Using SSH client type: native
	I0819 12:03:28.751880  139391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0819 12:03:28.751895  139391 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 12:04:59.515518  139391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 12:04:59.515580  139391 machine.go:96] duration metric: took 1m31.864455758s to provisionDockerMachine
	I0819 12:04:59.515602  139391 start.go:293] postStartSetup for "multinode-320821" (driver="kvm2")
	I0819 12:04:59.515618  139391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 12:04:59.515645  139391 main.go:141] libmachine: (multinode-320821) Calling .DriverName
	I0819 12:04:59.516012  139391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 12:04:59.516044  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHHostname
	I0819 12:04:59.519381  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:04:59.519856  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:04:59.519884  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:04:59.520090  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHPort
	I0819 12:04:59.520309  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:04:59.520474  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHUsername
	I0819 12:04:59.520596  139391 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/multinode-320821/id_rsa Username:docker}
	I0819 12:04:59.602664  139391 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 12:04:59.606735  139391 command_runner.go:130] > NAME=Buildroot
	I0819 12:04:59.606765  139391 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0819 12:04:59.606770  139391 command_runner.go:130] > ID=buildroot
	I0819 12:04:59.606775  139391 command_runner.go:130] > VERSION_ID=2023.02.9
	I0819 12:04:59.606780  139391 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0819 12:04:59.607093  139391 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 12:04:59.607125  139391 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/addons for local assets ...
	I0819 12:04:59.607204  139391 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/files for local assets ...
	I0819 12:04:59.607293  139391 filesync.go:149] local asset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> 1066322.pem in /etc/ssl/certs
	I0819 12:04:59.607309  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> /etc/ssl/certs/1066322.pem
	I0819 12:04:59.607421  139391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 12:04:59.616930  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 12:04:59.640345  139391 start.go:296] duration metric: took 124.725493ms for postStartSetup
	I0819 12:04:59.640395  139391 fix.go:56] duration metric: took 1m32.010746949s for fixHost
	I0819 12:04:59.640421  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHHostname
	I0819 12:04:59.643353  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:04:59.643896  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:04:59.643939  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:04:59.644114  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHPort
	I0819 12:04:59.644334  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:04:59.644527  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:04:59.644669  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHUsername
	I0819 12:04:59.644818  139391 main.go:141] libmachine: Using SSH client type: native
	I0819 12:04:59.645022  139391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0819 12:04:59.645034  139391 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 12:04:59.748480  139391 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724069099.724700385
	
	I0819 12:04:59.748510  139391 fix.go:216] guest clock: 1724069099.724700385
	I0819 12:04:59.748522  139391 fix.go:229] Guest: 2024-08-19 12:04:59.724700385 +0000 UTC Remote: 2024-08-19 12:04:59.640401835 +0000 UTC m=+92.144249650 (delta=84.29855ms)
	I0819 12:04:59.748556  139391 fix.go:200] guest clock delta is within tolerance: 84.29855ms
	I0819 12:04:59.748563  139391 start.go:83] releasing machines lock for "multinode-320821", held for 1m32.118928184s
	I0819 12:04:59.748591  139391 main.go:141] libmachine: (multinode-320821) Calling .DriverName
	I0819 12:04:59.748908  139391 main.go:141] libmachine: (multinode-320821) Calling .GetIP
	I0819 12:04:59.751690  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:04:59.752117  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:04:59.752153  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:04:59.752284  139391 main.go:141] libmachine: (multinode-320821) Calling .DriverName
	I0819 12:04:59.752814  139391 main.go:141] libmachine: (multinode-320821) Calling .DriverName
	I0819 12:04:59.753001  139391 main.go:141] libmachine: (multinode-320821) Calling .DriverName
	I0819 12:04:59.753071  139391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 12:04:59.753136  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHHostname
	I0819 12:04:59.753257  139391 ssh_runner.go:195] Run: cat /version.json
	I0819 12:04:59.753279  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHHostname
	I0819 12:04:59.755679  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:04:59.756035  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:04:59.756065  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:04:59.756088  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:04:59.756213  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHPort
	I0819 12:04:59.756391  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:04:59.756454  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:04:59.756481  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:04:59.756551  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHUsername
	I0819 12:04:59.756666  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHPort
	I0819 12:04:59.756745  139391 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/multinode-320821/id_rsa Username:docker}
	I0819 12:04:59.756851  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:04:59.756977  139391 main.go:141] libmachine: (multinode-320821) Calling .GetSSHUsername
	I0819 12:04:59.757113  139391 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/multinode-320821/id_rsa Username:docker}
	I0819 12:04:59.841194  139391 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0819 12:04:59.841364  139391 ssh_runner.go:195] Run: systemctl --version
	I0819 12:04:59.864130  139391 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0819 12:04:59.865028  139391 command_runner.go:130] > systemd 252 (252)
	I0819 12:04:59.865061  139391 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0819 12:04:59.865114  139391 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 12:05:00.016532  139391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 12:05:00.024850  139391 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0819 12:05:00.024925  139391 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 12:05:00.025010  139391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 12:05:00.034121  139391 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 12:05:00.034160  139391 start.go:495] detecting cgroup driver to use...
	I0819 12:05:00.034243  139391 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 12:05:00.051507  139391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 12:05:00.065018  139391 docker.go:217] disabling cri-docker service (if available) ...
	I0819 12:05:00.065101  139391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 12:05:00.078730  139391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 12:05:00.092719  139391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 12:05:00.227017  139391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 12:05:00.362161  139391 docker.go:233] disabling docker service ...
	I0819 12:05:00.362228  139391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 12:05:00.379630  139391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 12:05:00.393082  139391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 12:05:00.531254  139391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 12:05:00.672253  139391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 12:05:00.688645  139391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 12:05:00.707575  139391 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0819 12:05:00.707636  139391 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 12:05:00.707683  139391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:05:00.718359  139391 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 12:05:00.718438  139391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:05:00.729156  139391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:05:00.739812  139391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:05:00.750337  139391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 12:05:00.761419  139391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:05:00.772023  139391 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:05:00.783220  139391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:05:00.793819  139391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 12:05:00.806113  139391 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0819 12:05:00.806284  139391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 12:05:00.826838  139391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:05:00.972814  139391 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 12:05:11.010061  139391 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.037202005s)
	I0819 12:05:11.010102  139391 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 12:05:11.010200  139391 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 12:05:11.015047  139391 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0819 12:05:11.015077  139391 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0819 12:05:11.015088  139391 command_runner.go:130] > Device: 0,22	Inode: 1349        Links: 1
	I0819 12:05:11.015097  139391 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 12:05:11.015102  139391 command_runner.go:130] > Access: 2024-08-19 12:05:10.880618007 +0000
	I0819 12:05:11.015108  139391 command_runner.go:130] > Modify: 2024-08-19 12:05:10.880618007 +0000
	I0819 12:05:11.015113  139391 command_runner.go:130] > Change: 2024-08-19 12:05:10.880618007 +0000
	I0819 12:05:11.015117  139391 command_runner.go:130] >  Birth: -
	I0819 12:05:11.015146  139391 start.go:563] Will wait 60s for crictl version
	I0819 12:05:11.015191  139391 ssh_runner.go:195] Run: which crictl
	I0819 12:05:11.019009  139391 command_runner.go:130] > /usr/bin/crictl
	I0819 12:05:11.019090  139391 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 12:05:11.053069  139391 command_runner.go:130] > Version:  0.1.0
	I0819 12:05:11.053107  139391 command_runner.go:130] > RuntimeName:  cri-o
	I0819 12:05:11.053114  139391 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0819 12:05:11.053123  139391 command_runner.go:130] > RuntimeApiVersion:  v1
	I0819 12:05:11.054275  139391 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 12:05:11.054352  139391 ssh_runner.go:195] Run: crio --version
	I0819 12:05:11.085702  139391 command_runner.go:130] > crio version 1.29.1
	I0819 12:05:11.085740  139391 command_runner.go:130] > Version:        1.29.1
	I0819 12:05:11.085749  139391 command_runner.go:130] > GitCommit:      unknown
	I0819 12:05:11.085755  139391 command_runner.go:130] > GitCommitDate:  unknown
	I0819 12:05:11.085760  139391 command_runner.go:130] > GitTreeState:   clean
	I0819 12:05:11.085771  139391 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 12:05:11.085777  139391 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 12:05:11.085783  139391 command_runner.go:130] > Compiler:       gc
	I0819 12:05:11.085789  139391 command_runner.go:130] > Platform:       linux/amd64
	I0819 12:05:11.085805  139391 command_runner.go:130] > Linkmode:       dynamic
	I0819 12:05:11.085820  139391 command_runner.go:130] > BuildTags:      
	I0819 12:05:11.085826  139391 command_runner.go:130] >   containers_image_ostree_stub
	I0819 12:05:11.085831  139391 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 12:05:11.085835  139391 command_runner.go:130] >   btrfs_noversion
	I0819 12:05:11.085840  139391 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 12:05:11.085844  139391 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 12:05:11.085847  139391 command_runner.go:130] >   seccomp
	I0819 12:05:11.085852  139391 command_runner.go:130] > LDFlags:          unknown
	I0819 12:05:11.085856  139391 command_runner.go:130] > SeccompEnabled:   true
	I0819 12:05:11.085860  139391 command_runner.go:130] > AppArmorEnabled:  false
	I0819 12:05:11.085950  139391 ssh_runner.go:195] Run: crio --version
	I0819 12:05:11.117992  139391 command_runner.go:130] > crio version 1.29.1
	I0819 12:05:11.118022  139391 command_runner.go:130] > Version:        1.29.1
	I0819 12:05:11.118031  139391 command_runner.go:130] > GitCommit:      unknown
	I0819 12:05:11.118037  139391 command_runner.go:130] > GitCommitDate:  unknown
	I0819 12:05:11.118043  139391 command_runner.go:130] > GitTreeState:   clean
	I0819 12:05:11.118051  139391 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 12:05:11.118057  139391 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 12:05:11.118062  139391 command_runner.go:130] > Compiler:       gc
	I0819 12:05:11.118066  139391 command_runner.go:130] > Platform:       linux/amd64
	I0819 12:05:11.118073  139391 command_runner.go:130] > Linkmode:       dynamic
	I0819 12:05:11.118078  139391 command_runner.go:130] > BuildTags:      
	I0819 12:05:11.118090  139391 command_runner.go:130] >   containers_image_ostree_stub
	I0819 12:05:11.118096  139391 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 12:05:11.118102  139391 command_runner.go:130] >   btrfs_noversion
	I0819 12:05:11.118110  139391 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 12:05:11.118117  139391 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 12:05:11.118126  139391 command_runner.go:130] >   seccomp
	I0819 12:05:11.118133  139391 command_runner.go:130] > LDFlags:          unknown
	I0819 12:05:11.118141  139391 command_runner.go:130] > SeccompEnabled:   true
	I0819 12:05:11.118145  139391 command_runner.go:130] > AppArmorEnabled:  false
	I0819 12:05:11.120119  139391 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 12:05:11.121702  139391 main.go:141] libmachine: (multinode-320821) Calling .GetIP
	I0819 12:05:11.124732  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:05:11.125131  139391 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:05:11.125162  139391 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:05:11.125351  139391 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 12:05:11.129578  139391 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0819 12:05:11.129787  139391 kubeadm.go:883] updating cluster {Name:multinode-320821 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-320821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.19 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 12:05:11.129942  139391 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:05:11.130002  139391 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:05:11.170433  139391 command_runner.go:130] > {
	I0819 12:05:11.170459  139391 command_runner.go:130] >   "images": [
	I0819 12:05:11.170471  139391 command_runner.go:130] >     {
	I0819 12:05:11.170478  139391 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 12:05:11.170489  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.170494  139391 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 12:05:11.170499  139391 command_runner.go:130] >       ],
	I0819 12:05:11.170502  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.170512  139391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 12:05:11.170520  139391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 12:05:11.170525  139391 command_runner.go:130] >       ],
	I0819 12:05:11.170532  139391 command_runner.go:130] >       "size": "87165492",
	I0819 12:05:11.170537  139391 command_runner.go:130] >       "uid": null,
	I0819 12:05:11.170542  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.170554  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.170561  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.170569  139391 command_runner.go:130] >     },
	I0819 12:05:11.170573  139391 command_runner.go:130] >     {
	I0819 12:05:11.170584  139391 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0819 12:05:11.170593  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.170599  139391 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0819 12:05:11.170610  139391 command_runner.go:130] >       ],
	I0819 12:05:11.170615  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.170621  139391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0819 12:05:11.170628  139391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0819 12:05:11.170634  139391 command_runner.go:130] >       ],
	I0819 12:05:11.170639  139391 command_runner.go:130] >       "size": "87190579",
	I0819 12:05:11.170645  139391 command_runner.go:130] >       "uid": null,
	I0819 12:05:11.170657  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.170667  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.170680  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.170687  139391 command_runner.go:130] >     },
	I0819 12:05:11.170691  139391 command_runner.go:130] >     {
	I0819 12:05:11.170697  139391 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0819 12:05:11.170704  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.170710  139391 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0819 12:05:11.170715  139391 command_runner.go:130] >       ],
	I0819 12:05:11.170720  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.170730  139391 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0819 12:05:11.170743  139391 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0819 12:05:11.170753  139391 command_runner.go:130] >       ],
	I0819 12:05:11.170763  139391 command_runner.go:130] >       "size": "1363676",
	I0819 12:05:11.170773  139391 command_runner.go:130] >       "uid": null,
	I0819 12:05:11.170782  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.170792  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.170799  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.170802  139391 command_runner.go:130] >     },
	I0819 12:05:11.170808  139391 command_runner.go:130] >     {
	I0819 12:05:11.170815  139391 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 12:05:11.170821  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.170829  139391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 12:05:11.170837  139391 command_runner.go:130] >       ],
	I0819 12:05:11.170847  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.170863  139391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 12:05:11.170882  139391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 12:05:11.170890  139391 command_runner.go:130] >       ],
	I0819 12:05:11.170894  139391 command_runner.go:130] >       "size": "31470524",
	I0819 12:05:11.170899  139391 command_runner.go:130] >       "uid": null,
	I0819 12:05:11.170903  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.170909  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.170916  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.170924  139391 command_runner.go:130] >     },
	I0819 12:05:11.170933  139391 command_runner.go:130] >     {
	I0819 12:05:11.170946  139391 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 12:05:11.170955  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.170966  139391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 12:05:11.170974  139391 command_runner.go:130] >       ],
	I0819 12:05:11.170983  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.170993  139391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 12:05:11.171007  139391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 12:05:11.171016  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171023  139391 command_runner.go:130] >       "size": "61245718",
	I0819 12:05:11.171033  139391 command_runner.go:130] >       "uid": null,
	I0819 12:05:11.171044  139391 command_runner.go:130] >       "username": "nonroot",
	I0819 12:05:11.171055  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.171064  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.171073  139391 command_runner.go:130] >     },
	I0819 12:05:11.171081  139391 command_runner.go:130] >     {
	I0819 12:05:11.171090  139391 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 12:05:11.171097  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.171105  139391 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 12:05:11.171114  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171121  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.171135  139391 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 12:05:11.171149  139391 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 12:05:11.171157  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171167  139391 command_runner.go:130] >       "size": "149009664",
	I0819 12:05:11.171176  139391 command_runner.go:130] >       "uid": {
	I0819 12:05:11.171185  139391 command_runner.go:130] >         "value": "0"
	I0819 12:05:11.171191  139391 command_runner.go:130] >       },
	I0819 12:05:11.171196  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.171205  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.171212  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.171222  139391 command_runner.go:130] >     },
	I0819 12:05:11.171231  139391 command_runner.go:130] >     {
	I0819 12:05:11.171243  139391 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 12:05:11.171253  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.171263  139391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 12:05:11.171270  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171276  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.171288  139391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 12:05:11.171302  139391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 12:05:11.171312  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171321  139391 command_runner.go:130] >       "size": "95233506",
	I0819 12:05:11.171330  139391 command_runner.go:130] >       "uid": {
	I0819 12:05:11.171339  139391 command_runner.go:130] >         "value": "0"
	I0819 12:05:11.171348  139391 command_runner.go:130] >       },
	I0819 12:05:11.171357  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.171364  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.171369  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.171378  139391 command_runner.go:130] >     },
	I0819 12:05:11.171387  139391 command_runner.go:130] >     {
	I0819 12:05:11.171398  139391 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 12:05:11.171409  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.171420  139391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 12:05:11.171429  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171439  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.171460  139391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 12:05:11.171473  139391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 12:05:11.171486  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171494  139391 command_runner.go:130] >       "size": "89437512",
	I0819 12:05:11.171501  139391 command_runner.go:130] >       "uid": {
	I0819 12:05:11.171510  139391 command_runner.go:130] >         "value": "0"
	I0819 12:05:11.171516  139391 command_runner.go:130] >       },
	I0819 12:05:11.171522  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.171528  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.171534  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.171539  139391 command_runner.go:130] >     },
	I0819 12:05:11.171546  139391 command_runner.go:130] >     {
	I0819 12:05:11.171555  139391 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 12:05:11.171561  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.171568  139391 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 12:05:11.171573  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171579  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.171603  139391 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 12:05:11.171613  139391 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 12:05:11.171619  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171626  139391 command_runner.go:130] >       "size": "92728217",
	I0819 12:05:11.171633  139391 command_runner.go:130] >       "uid": null,
	I0819 12:05:11.171640  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.171646  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.171653  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.171659  139391 command_runner.go:130] >     },
	I0819 12:05:11.171666  139391 command_runner.go:130] >     {
	I0819 12:05:11.171678  139391 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 12:05:11.171690  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.171703  139391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 12:05:11.171711  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171721  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.171747  139391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 12:05:11.171763  139391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 12:05:11.171772  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171780  139391 command_runner.go:130] >       "size": "68420936",
	I0819 12:05:11.171789  139391 command_runner.go:130] >       "uid": {
	I0819 12:05:11.171796  139391 command_runner.go:130] >         "value": "0"
	I0819 12:05:11.171803  139391 command_runner.go:130] >       },
	I0819 12:05:11.171812  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.171817  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.171823  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.171831  139391 command_runner.go:130] >     },
	I0819 12:05:11.171837  139391 command_runner.go:130] >     {
	I0819 12:05:11.171850  139391 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 12:05:11.171859  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.171869  139391 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 12:05:11.171877  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171885  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.171898  139391 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 12:05:11.171908  139391 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 12:05:11.171917  139391 command_runner.go:130] >       ],
	I0819 12:05:11.171927  139391 command_runner.go:130] >       "size": "742080",
	I0819 12:05:11.171932  139391 command_runner.go:130] >       "uid": {
	I0819 12:05:11.171944  139391 command_runner.go:130] >         "value": "65535"
	I0819 12:05:11.171953  139391 command_runner.go:130] >       },
	I0819 12:05:11.171962  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.171971  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.171980  139391 command_runner.go:130] >       "pinned": true
	I0819 12:05:11.171989  139391 command_runner.go:130] >     }
	I0819 12:05:11.171996  139391 command_runner.go:130] >   ]
	I0819 12:05:11.171999  139391 command_runner.go:130] > }
	I0819 12:05:11.172215  139391 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 12:05:11.172227  139391 crio.go:433] Images already preloaded, skipping extraction
	I0819 12:05:11.172288  139391 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:05:11.207439  139391 command_runner.go:130] > {
	I0819 12:05:11.207479  139391 command_runner.go:130] >   "images": [
	I0819 12:05:11.207484  139391 command_runner.go:130] >     {
	I0819 12:05:11.207492  139391 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 12:05:11.207497  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.207503  139391 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 12:05:11.207507  139391 command_runner.go:130] >       ],
	I0819 12:05:11.207511  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.207520  139391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 12:05:11.207528  139391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 12:05:11.207532  139391 command_runner.go:130] >       ],
	I0819 12:05:11.207536  139391 command_runner.go:130] >       "size": "87165492",
	I0819 12:05:11.207540  139391 command_runner.go:130] >       "uid": null,
	I0819 12:05:11.207544  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.207552  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.207556  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.207560  139391 command_runner.go:130] >     },
	I0819 12:05:11.207564  139391 command_runner.go:130] >     {
	I0819 12:05:11.207569  139391 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0819 12:05:11.207573  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.207578  139391 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0819 12:05:11.207582  139391 command_runner.go:130] >       ],
	I0819 12:05:11.207585  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.207592  139391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0819 12:05:11.207599  139391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0819 12:05:11.207602  139391 command_runner.go:130] >       ],
	I0819 12:05:11.207606  139391 command_runner.go:130] >       "size": "87190579",
	I0819 12:05:11.207613  139391 command_runner.go:130] >       "uid": null,
	I0819 12:05:11.207622  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.207626  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.207631  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.207637  139391 command_runner.go:130] >     },
	I0819 12:05:11.207640  139391 command_runner.go:130] >     {
	I0819 12:05:11.207645  139391 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0819 12:05:11.207649  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.207654  139391 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0819 12:05:11.207660  139391 command_runner.go:130] >       ],
	I0819 12:05:11.207664  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.207671  139391 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0819 12:05:11.207680  139391 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0819 12:05:11.207683  139391 command_runner.go:130] >       ],
	I0819 12:05:11.207687  139391 command_runner.go:130] >       "size": "1363676",
	I0819 12:05:11.207691  139391 command_runner.go:130] >       "uid": null,
	I0819 12:05:11.207695  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.207709  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.207717  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.207739  139391 command_runner.go:130] >     },
	I0819 12:05:11.207743  139391 command_runner.go:130] >     {
	I0819 12:05:11.207749  139391 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 12:05:11.207753  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.207758  139391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 12:05:11.207762  139391 command_runner.go:130] >       ],
	I0819 12:05:11.207766  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.207775  139391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 12:05:11.207786  139391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 12:05:11.207790  139391 command_runner.go:130] >       ],
	I0819 12:05:11.207794  139391 command_runner.go:130] >       "size": "31470524",
	I0819 12:05:11.207798  139391 command_runner.go:130] >       "uid": null,
	I0819 12:05:11.207802  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.207806  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.207810  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.207814  139391 command_runner.go:130] >     },
	I0819 12:05:11.207817  139391 command_runner.go:130] >     {
	I0819 12:05:11.207823  139391 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 12:05:11.207830  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.207834  139391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 12:05:11.207838  139391 command_runner.go:130] >       ],
	I0819 12:05:11.207843  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.207851  139391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 12:05:11.207860  139391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 12:05:11.207864  139391 command_runner.go:130] >       ],
	I0819 12:05:11.207867  139391 command_runner.go:130] >       "size": "61245718",
	I0819 12:05:11.207871  139391 command_runner.go:130] >       "uid": null,
	I0819 12:05:11.207877  139391 command_runner.go:130] >       "username": "nonroot",
	I0819 12:05:11.207881  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.207885  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.207888  139391 command_runner.go:130] >     },
	I0819 12:05:11.207891  139391 command_runner.go:130] >     {
	I0819 12:05:11.207897  139391 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 12:05:11.207903  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.207908  139391 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 12:05:11.207912  139391 command_runner.go:130] >       ],
	I0819 12:05:11.207917  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.207926  139391 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 12:05:11.207934  139391 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 12:05:11.207940  139391 command_runner.go:130] >       ],
	I0819 12:05:11.207943  139391 command_runner.go:130] >       "size": "149009664",
	I0819 12:05:11.207949  139391 command_runner.go:130] >       "uid": {
	I0819 12:05:11.207953  139391 command_runner.go:130] >         "value": "0"
	I0819 12:05:11.207959  139391 command_runner.go:130] >       },
	I0819 12:05:11.207964  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.207967  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.207971  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.207975  139391 command_runner.go:130] >     },
	I0819 12:05:11.207979  139391 command_runner.go:130] >     {
	I0819 12:05:11.207987  139391 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 12:05:11.207991  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.207996  139391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 12:05:11.208002  139391 command_runner.go:130] >       ],
	I0819 12:05:11.208006  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.208013  139391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 12:05:11.208022  139391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 12:05:11.208026  139391 command_runner.go:130] >       ],
	I0819 12:05:11.208031  139391 command_runner.go:130] >       "size": "95233506",
	I0819 12:05:11.208037  139391 command_runner.go:130] >       "uid": {
	I0819 12:05:11.208042  139391 command_runner.go:130] >         "value": "0"
	I0819 12:05:11.208047  139391 command_runner.go:130] >       },
	I0819 12:05:11.208051  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.208054  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.208059  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.208062  139391 command_runner.go:130] >     },
	I0819 12:05:11.208066  139391 command_runner.go:130] >     {
	I0819 12:05:11.208072  139391 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 12:05:11.208076  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.208082  139391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 12:05:11.208087  139391 command_runner.go:130] >       ],
	I0819 12:05:11.208091  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.208105  139391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 12:05:11.208115  139391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 12:05:11.208121  139391 command_runner.go:130] >       ],
	I0819 12:05:11.208125  139391 command_runner.go:130] >       "size": "89437512",
	I0819 12:05:11.208130  139391 command_runner.go:130] >       "uid": {
	I0819 12:05:11.208136  139391 command_runner.go:130] >         "value": "0"
	I0819 12:05:11.208140  139391 command_runner.go:130] >       },
	I0819 12:05:11.208144  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.208148  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.208152  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.208156  139391 command_runner.go:130] >     },
	I0819 12:05:11.208159  139391 command_runner.go:130] >     {
	I0819 12:05:11.208165  139391 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 12:05:11.208171  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.208176  139391 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 12:05:11.208180  139391 command_runner.go:130] >       ],
	I0819 12:05:11.208184  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.208192  139391 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 12:05:11.208204  139391 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 12:05:11.208210  139391 command_runner.go:130] >       ],
	I0819 12:05:11.208214  139391 command_runner.go:130] >       "size": "92728217",
	I0819 12:05:11.208218  139391 command_runner.go:130] >       "uid": null,
	I0819 12:05:11.208223  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.208227  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.208231  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.208234  139391 command_runner.go:130] >     },
	I0819 12:05:11.208238  139391 command_runner.go:130] >     {
	I0819 12:05:11.208244  139391 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 12:05:11.208250  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.208256  139391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 12:05:11.208262  139391 command_runner.go:130] >       ],
	I0819 12:05:11.208268  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.208281  139391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 12:05:11.208293  139391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 12:05:11.208301  139391 command_runner.go:130] >       ],
	I0819 12:05:11.208308  139391 command_runner.go:130] >       "size": "68420936",
	I0819 12:05:11.208317  139391 command_runner.go:130] >       "uid": {
	I0819 12:05:11.208322  139391 command_runner.go:130] >         "value": "0"
	I0819 12:05:11.208326  139391 command_runner.go:130] >       },
	I0819 12:05:11.208330  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.208335  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.208342  139391 command_runner.go:130] >       "pinned": false
	I0819 12:05:11.208345  139391 command_runner.go:130] >     },
	I0819 12:05:11.208350  139391 command_runner.go:130] >     {
	I0819 12:05:11.208357  139391 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 12:05:11.208363  139391 command_runner.go:130] >       "repoTags": [
	I0819 12:05:11.208368  139391 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 12:05:11.208375  139391 command_runner.go:130] >       ],
	I0819 12:05:11.208381  139391 command_runner.go:130] >       "repoDigests": [
	I0819 12:05:11.208395  139391 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 12:05:11.208410  139391 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 12:05:11.208418  139391 command_runner.go:130] >       ],
	I0819 12:05:11.208422  139391 command_runner.go:130] >       "size": "742080",
	I0819 12:05:11.208426  139391 command_runner.go:130] >       "uid": {
	I0819 12:05:11.208429  139391 command_runner.go:130] >         "value": "65535"
	I0819 12:05:11.208433  139391 command_runner.go:130] >       },
	I0819 12:05:11.208438  139391 command_runner.go:130] >       "username": "",
	I0819 12:05:11.208448  139391 command_runner.go:130] >       "spec": null,
	I0819 12:05:11.208456  139391 command_runner.go:130] >       "pinned": true
	I0819 12:05:11.208464  139391 command_runner.go:130] >     }
	I0819 12:05:11.208477  139391 command_runner.go:130] >   ]
	I0819 12:05:11.208483  139391 command_runner.go:130] > }
	I0819 12:05:11.208673  139391 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 12:05:11.208699  139391 cache_images.go:84] Images are preloaded, skipping loading
	I0819 12:05:11.208709  139391 kubeadm.go:934] updating node { 192.168.39.88 8443 v1.31.0 crio true true} ...
	I0819 12:05:11.208819  139391 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-320821 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-320821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 12:05:11.208910  139391 ssh_runner.go:195] Run: crio config
	I0819 12:05:11.239968  139391 command_runner.go:130] ! time="2024-08-19 12:05:11.215325817Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0819 12:05:11.246394  139391 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0819 12:05:11.252046  139391 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0819 12:05:11.252071  139391 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0819 12:05:11.252077  139391 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0819 12:05:11.252081  139391 command_runner.go:130] > #
	I0819 12:05:11.252088  139391 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0819 12:05:11.252094  139391 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0819 12:05:11.252099  139391 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0819 12:05:11.252108  139391 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0819 12:05:11.252112  139391 command_runner.go:130] > # reload'.
	I0819 12:05:11.252118  139391 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0819 12:05:11.252124  139391 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0819 12:05:11.252133  139391 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0819 12:05:11.252139  139391 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0819 12:05:11.252143  139391 command_runner.go:130] > [crio]
	I0819 12:05:11.252149  139391 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0819 12:05:11.252155  139391 command_runner.go:130] > # containers images, in this directory.
	I0819 12:05:11.252159  139391 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0819 12:05:11.252167  139391 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0819 12:05:11.252171  139391 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0819 12:05:11.252179  139391 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0819 12:05:11.252184  139391 command_runner.go:130] > # imagestore = ""
	I0819 12:05:11.252190  139391 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0819 12:05:11.252200  139391 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0819 12:05:11.252206  139391 command_runner.go:130] > storage_driver = "overlay"
	I0819 12:05:11.252217  139391 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0819 12:05:11.252226  139391 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0819 12:05:11.252251  139391 command_runner.go:130] > storage_option = [
	I0819 12:05:11.252261  139391 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0819 12:05:11.252267  139391 command_runner.go:130] > ]
	I0819 12:05:11.252276  139391 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0819 12:05:11.252286  139391 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0819 12:05:11.252295  139391 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0819 12:05:11.252300  139391 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0819 12:05:11.252306  139391 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0819 12:05:11.252311  139391 command_runner.go:130] > # always happen on a node reboot
	I0819 12:05:11.252316  139391 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0819 12:05:11.252327  139391 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0819 12:05:11.252335  139391 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0819 12:05:11.252340  139391 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0819 12:05:11.252346  139391 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0819 12:05:11.252354  139391 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0819 12:05:11.252364  139391 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0819 12:05:11.252367  139391 command_runner.go:130] > # internal_wipe = true
	I0819 12:05:11.252378  139391 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0819 12:05:11.252389  139391 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0819 12:05:11.252396  139391 command_runner.go:130] > # internal_repair = false
	I0819 12:05:11.252408  139391 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0819 12:05:11.252419  139391 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0819 12:05:11.252427  139391 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0819 12:05:11.252432  139391 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0819 12:05:11.252443  139391 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0819 12:05:11.252449  139391 command_runner.go:130] > [crio.api]
	I0819 12:05:11.252454  139391 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0819 12:05:11.252461  139391 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0819 12:05:11.252466  139391 command_runner.go:130] > # IP address on which the stream server will listen.
	I0819 12:05:11.252478  139391 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0819 12:05:11.252492  139391 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0819 12:05:11.252502  139391 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0819 12:05:11.252511  139391 command_runner.go:130] > # stream_port = "0"
	I0819 12:05:11.252522  139391 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0819 12:05:11.252531  139391 command_runner.go:130] > # stream_enable_tls = false
	I0819 12:05:11.252542  139391 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0819 12:05:11.252548  139391 command_runner.go:130] > # stream_idle_timeout = ""
	I0819 12:05:11.252558  139391 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0819 12:05:11.252566  139391 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0819 12:05:11.252574  139391 command_runner.go:130] > # minutes.
	I0819 12:05:11.252584  139391 command_runner.go:130] > # stream_tls_cert = ""
	I0819 12:05:11.252597  139391 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0819 12:05:11.252607  139391 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0819 12:05:11.252617  139391 command_runner.go:130] > # stream_tls_key = ""
	I0819 12:05:11.252627  139391 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0819 12:05:11.252639  139391 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0819 12:05:11.252658  139391 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0819 12:05:11.252664  139391 command_runner.go:130] > # stream_tls_ca = ""
	I0819 12:05:11.252672  139391 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 12:05:11.252682  139391 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0819 12:05:11.252696  139391 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 12:05:11.252706  139391 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0819 12:05:11.252717  139391 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0819 12:05:11.252728  139391 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0819 12:05:11.252737  139391 command_runner.go:130] > [crio.runtime]
	I0819 12:05:11.252747  139391 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0819 12:05:11.252760  139391 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0819 12:05:11.252767  139391 command_runner.go:130] > # "nofile=1024:2048"
	I0819 12:05:11.252776  139391 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0819 12:05:11.252790  139391 command_runner.go:130] > # default_ulimits = [
	I0819 12:05:11.252798  139391 command_runner.go:130] > # ]
	I0819 12:05:11.252808  139391 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0819 12:05:11.252818  139391 command_runner.go:130] > # no_pivot = false
	I0819 12:05:11.252830  139391 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0819 12:05:11.252842  139391 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0819 12:05:11.252853  139391 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0819 12:05:11.252866  139391 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0819 12:05:11.252875  139391 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0819 12:05:11.252883  139391 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 12:05:11.252893  139391 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0819 12:05:11.252906  139391 command_runner.go:130] > # Cgroup setting for conmon
	I0819 12:05:11.252920  139391 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0819 12:05:11.252929  139391 command_runner.go:130] > conmon_cgroup = "pod"
	I0819 12:05:11.252942  139391 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0819 12:05:11.252952  139391 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0819 12:05:11.252969  139391 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 12:05:11.252976  139391 command_runner.go:130] > conmon_env = [
	I0819 12:05:11.252983  139391 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 12:05:11.252992  139391 command_runner.go:130] > ]
	I0819 12:05:11.253003  139391 command_runner.go:130] > # Additional environment variables to set for all the
	I0819 12:05:11.253014  139391 command_runner.go:130] > # containers. These are overridden if set in the
	I0819 12:05:11.253027  139391 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0819 12:05:11.253035  139391 command_runner.go:130] > # default_env = [
	I0819 12:05:11.253043  139391 command_runner.go:130] > # ]
	I0819 12:05:11.253056  139391 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0819 12:05:11.253070  139391 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0819 12:05:11.253076  139391 command_runner.go:130] > # selinux = false
	I0819 12:05:11.253086  139391 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0819 12:05:11.253099  139391 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0819 12:05:11.253111  139391 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0819 12:05:11.253120  139391 command_runner.go:130] > # seccomp_profile = ""
	I0819 12:05:11.253132  139391 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0819 12:05:11.253143  139391 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0819 12:05:11.253155  139391 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0819 12:05:11.253162  139391 command_runner.go:130] > # which might increase security.
	I0819 12:05:11.253168  139391 command_runner.go:130] > # This option is currently deprecated,
	I0819 12:05:11.253180  139391 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0819 12:05:11.253191  139391 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0819 12:05:11.253202  139391 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0819 12:05:11.253215  139391 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0819 12:05:11.253228  139391 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0819 12:05:11.253241  139391 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0819 12:05:11.253252  139391 command_runner.go:130] > # This option supports live configuration reload.
	I0819 12:05:11.253261  139391 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0819 12:05:11.253271  139391 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0819 12:05:11.253281  139391 command_runner.go:130] > # the cgroup blockio controller.
	I0819 12:05:11.253292  139391 command_runner.go:130] > # blockio_config_file = ""
	I0819 12:05:11.253306  139391 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0819 12:05:11.253317  139391 command_runner.go:130] > # blockio parameters.
	I0819 12:05:11.253326  139391 command_runner.go:130] > # blockio_reload = false
	I0819 12:05:11.253338  139391 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0819 12:05:11.253348  139391 command_runner.go:130] > # irqbalance daemon.
	I0819 12:05:11.253359  139391 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0819 12:05:11.253371  139391 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0819 12:05:11.253385  139391 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0819 12:05:11.253399  139391 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0819 12:05:11.253412  139391 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0819 12:05:11.253427  139391 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0819 12:05:11.253437  139391 command_runner.go:130] > # This option supports live configuration reload.
	I0819 12:05:11.253448  139391 command_runner.go:130] > # rdt_config_file = ""
	I0819 12:05:11.253459  139391 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0819 12:05:11.253466  139391 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0819 12:05:11.253489  139391 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0819 12:05:11.253500  139391 command_runner.go:130] > # separate_pull_cgroup = ""
	I0819 12:05:11.253510  139391 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0819 12:05:11.253523  139391 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0819 12:05:11.253532  139391 command_runner.go:130] > # will be added.
	I0819 12:05:11.253541  139391 command_runner.go:130] > # default_capabilities = [
	I0819 12:05:11.253549  139391 command_runner.go:130] > # 	"CHOWN",
	I0819 12:05:11.253558  139391 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0819 12:05:11.253567  139391 command_runner.go:130] > # 	"FSETID",
	I0819 12:05:11.253574  139391 command_runner.go:130] > # 	"FOWNER",
	I0819 12:05:11.253577  139391 command_runner.go:130] > # 	"SETGID",
	I0819 12:05:11.253585  139391 command_runner.go:130] > # 	"SETUID",
	I0819 12:05:11.253594  139391 command_runner.go:130] > # 	"SETPCAP",
	I0819 12:05:11.253601  139391 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0819 12:05:11.253609  139391 command_runner.go:130] > # 	"KILL",
	I0819 12:05:11.253615  139391 command_runner.go:130] > # ]
	I0819 12:05:11.253629  139391 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0819 12:05:11.253642  139391 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0819 12:05:11.253653  139391 command_runner.go:130] > # add_inheritable_capabilities = false
	I0819 12:05:11.253662  139391 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0819 12:05:11.253672  139391 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 12:05:11.253677  139391 command_runner.go:130] > default_sysctls = [
	I0819 12:05:11.253687  139391 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0819 12:05:11.253696  139391 command_runner.go:130] > ]
	I0819 12:05:11.253703  139391 command_runner.go:130] > # List of devices on the host that a
	I0819 12:05:11.253721  139391 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0819 12:05:11.253733  139391 command_runner.go:130] > # allowed_devices = [
	I0819 12:05:11.253742  139391 command_runner.go:130] > # 	"/dev/fuse",
	I0819 12:05:11.253749  139391 command_runner.go:130] > # ]
	I0819 12:05:11.253759  139391 command_runner.go:130] > # List of additional devices. specified as
	I0819 12:05:11.253771  139391 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0819 12:05:11.253780  139391 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0819 12:05:11.253799  139391 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 12:05:11.253809  139391 command_runner.go:130] > # additional_devices = [
	I0819 12:05:11.253815  139391 command_runner.go:130] > # ]
	I0819 12:05:11.253826  139391 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0819 12:05:11.253835  139391 command_runner.go:130] > # cdi_spec_dirs = [
	I0819 12:05:11.253847  139391 command_runner.go:130] > # 	"/etc/cdi",
	I0819 12:05:11.253857  139391 command_runner.go:130] > # 	"/var/run/cdi",
	I0819 12:05:11.253865  139391 command_runner.go:130] > # ]
	I0819 12:05:11.253877  139391 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0819 12:05:11.253886  139391 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0819 12:05:11.253895  139391 command_runner.go:130] > # Defaults to false.
	I0819 12:05:11.253906  139391 command_runner.go:130] > # device_ownership_from_security_context = false
	I0819 12:05:11.253918  139391 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0819 12:05:11.253930  139391 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0819 12:05:11.253940  139391 command_runner.go:130] > # hooks_dir = [
	I0819 12:05:11.253950  139391 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0819 12:05:11.253958  139391 command_runner.go:130] > # ]
	I0819 12:05:11.253968  139391 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0819 12:05:11.253977  139391 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0819 12:05:11.253988  139391 command_runner.go:130] > # its default mounts from the following two files:
	I0819 12:05:11.253997  139391 command_runner.go:130] > #
	I0819 12:05:11.254007  139391 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0819 12:05:11.254021  139391 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0819 12:05:11.254033  139391 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0819 12:05:11.254041  139391 command_runner.go:130] > #
	I0819 12:05:11.254053  139391 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0819 12:05:11.254066  139391 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0819 12:05:11.254075  139391 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0819 12:05:11.254085  139391 command_runner.go:130] > #      only add mounts it finds in this file.
	I0819 12:05:11.254093  139391 command_runner.go:130] > #
	I0819 12:05:11.254104  139391 command_runner.go:130] > # default_mounts_file = ""
	I0819 12:05:11.254115  139391 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0819 12:05:11.254128  139391 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0819 12:05:11.254137  139391 command_runner.go:130] > pids_limit = 1024
	I0819 12:05:11.254150  139391 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0819 12:05:11.254159  139391 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0819 12:05:11.254170  139391 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0819 12:05:11.254186  139391 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0819 12:05:11.254196  139391 command_runner.go:130] > # log_size_max = -1
	I0819 12:05:11.254210  139391 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0819 12:05:11.254223  139391 command_runner.go:130] > # log_to_journald = false
	I0819 12:05:11.254235  139391 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0819 12:05:11.254244  139391 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0819 12:05:11.254252  139391 command_runner.go:130] > # Path to directory for container attach sockets.
	I0819 12:05:11.254262  139391 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0819 12:05:11.254274  139391 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0819 12:05:11.254284  139391 command_runner.go:130] > # bind_mount_prefix = ""
	I0819 12:05:11.254296  139391 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0819 12:05:11.254306  139391 command_runner.go:130] > # read_only = false
	I0819 12:05:11.254318  139391 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0819 12:05:11.254330  139391 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0819 12:05:11.254339  139391 command_runner.go:130] > # live configuration reload.
	I0819 12:05:11.254345  139391 command_runner.go:130] > # log_level = "info"
	I0819 12:05:11.254353  139391 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0819 12:05:11.254364  139391 command_runner.go:130] > # This option supports live configuration reload.
	I0819 12:05:11.254374  139391 command_runner.go:130] > # log_filter = ""
	I0819 12:05:11.254384  139391 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0819 12:05:11.254399  139391 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0819 12:05:11.254409  139391 command_runner.go:130] > # separated by comma.
	I0819 12:05:11.254423  139391 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 12:05:11.254434  139391 command_runner.go:130] > # uid_mappings = ""
	I0819 12:05:11.254445  139391 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0819 12:05:11.254454  139391 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0819 12:05:11.254463  139391 command_runner.go:130] > # separated by comma.
	I0819 12:05:11.254478  139391 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 12:05:11.254488  139391 command_runner.go:130] > # gid_mappings = ""
	I0819 12:05:11.254498  139391 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0819 12:05:11.254510  139391 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 12:05:11.254523  139391 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 12:05:11.254537  139391 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 12:05:11.254545  139391 command_runner.go:130] > # minimum_mappable_uid = -1
	I0819 12:05:11.254553  139391 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0819 12:05:11.254565  139391 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 12:05:11.254578  139391 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 12:05:11.254594  139391 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 12:05:11.254607  139391 command_runner.go:130] > # minimum_mappable_gid = -1
	I0819 12:05:11.254619  139391 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0819 12:05:11.254631  139391 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0819 12:05:11.254643  139391 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0819 12:05:11.254649  139391 command_runner.go:130] > # ctr_stop_timeout = 30
	I0819 12:05:11.254656  139391 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0819 12:05:11.254669  139391 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0819 12:05:11.254680  139391 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0819 12:05:11.254688  139391 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0819 12:05:11.254697  139391 command_runner.go:130] > drop_infra_ctr = false
	I0819 12:05:11.254710  139391 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0819 12:05:11.254721  139391 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0819 12:05:11.254735  139391 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0819 12:05:11.254744  139391 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0819 12:05:11.254755  139391 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0819 12:05:11.254766  139391 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0819 12:05:11.254779  139391 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0819 12:05:11.254794  139391 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0819 12:05:11.254803  139391 command_runner.go:130] > # shared_cpuset = ""
	I0819 12:05:11.254816  139391 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0819 12:05:11.254826  139391 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0819 12:05:11.254837  139391 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0819 12:05:11.254847  139391 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0819 12:05:11.254856  139391 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0819 12:05:11.254868  139391 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0819 12:05:11.254881  139391 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0819 12:05:11.254890  139391 command_runner.go:130] > # enable_criu_support = false
	I0819 12:05:11.254902  139391 command_runner.go:130] > # Enable/disable the generation of the container,
	I0819 12:05:11.254914  139391 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0819 12:05:11.254923  139391 command_runner.go:130] > # enable_pod_events = false
	I0819 12:05:11.254933  139391 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0819 12:05:11.254945  139391 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0819 12:05:11.254957  139391 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0819 12:05:11.254968  139391 command_runner.go:130] > # default_runtime = "runc"
	I0819 12:05:11.254980  139391 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0819 12:05:11.254994  139391 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0819 12:05:11.255009  139391 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0819 12:05:11.255022  139391 command_runner.go:130] > # creation as a file is not desired either.
	I0819 12:05:11.255035  139391 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0819 12:05:11.255047  139391 command_runner.go:130] > # the hostname is being managed dynamically.
	I0819 12:05:11.255057  139391 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0819 12:05:11.255063  139391 command_runner.go:130] > # ]
	I0819 12:05:11.255076  139391 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0819 12:05:11.255089  139391 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0819 12:05:11.255101  139391 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0819 12:05:11.255112  139391 command_runner.go:130] > # Each entry in the table should follow the format:
	I0819 12:05:11.255119  139391 command_runner.go:130] > #
	I0819 12:05:11.255126  139391 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0819 12:05:11.255134  139391 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0819 12:05:11.255195  139391 command_runner.go:130] > # runtime_type = "oci"
	I0819 12:05:11.255212  139391 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0819 12:05:11.255217  139391 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0819 12:05:11.255223  139391 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0819 12:05:11.255233  139391 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0819 12:05:11.255243  139391 command_runner.go:130] > # monitor_env = []
	I0819 12:05:11.255254  139391 command_runner.go:130] > # privileged_without_host_devices = false
	I0819 12:05:11.255264  139391 command_runner.go:130] > # allowed_annotations = []
	I0819 12:05:11.255277  139391 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0819 12:05:11.255285  139391 command_runner.go:130] > # Where:
	I0819 12:05:11.255296  139391 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0819 12:05:11.255307  139391 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0819 12:05:11.255320  139391 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0819 12:05:11.255333  139391 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0819 12:05:11.255343  139391 command_runner.go:130] > #   in $PATH.
	I0819 12:05:11.255356  139391 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0819 12:05:11.255367  139391 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0819 12:05:11.255380  139391 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0819 12:05:11.255388  139391 command_runner.go:130] > #   state.
	I0819 12:05:11.255399  139391 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0819 12:05:11.255409  139391 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0819 12:05:11.255422  139391 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0819 12:05:11.255434  139391 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0819 12:05:11.255448  139391 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0819 12:05:11.255461  139391 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0819 12:05:11.255479  139391 command_runner.go:130] > #   The currently recognized values are:
	I0819 12:05:11.255492  139391 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0819 12:05:11.255502  139391 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0819 12:05:11.255512  139391 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0819 12:05:11.255524  139391 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0819 12:05:11.255539  139391 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0819 12:05:11.255552  139391 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0819 12:05:11.255566  139391 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0819 12:05:11.255579  139391 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0819 12:05:11.255592  139391 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0819 12:05:11.255603  139391 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0819 12:05:11.255611  139391 command_runner.go:130] > #   deprecated option "conmon".
	I0819 12:05:11.255621  139391 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0819 12:05:11.255633  139391 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0819 12:05:11.255643  139391 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0819 12:05:11.255655  139391 command_runner.go:130] > #   should be moved to the container's cgroup
	I0819 12:05:11.255668  139391 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0819 12:05:11.255679  139391 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0819 12:05:11.255689  139391 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0819 12:05:11.255700  139391 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0819 12:05:11.255706  139391 command_runner.go:130] > #
	I0819 12:05:11.255712  139391 command_runner.go:130] > # Using the seccomp notifier feature:
	I0819 12:05:11.255719  139391 command_runner.go:130] > #
	I0819 12:05:11.255744  139391 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0819 12:05:11.255758  139391 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0819 12:05:11.255766  139391 command_runner.go:130] > #
	I0819 12:05:11.255778  139391 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0819 12:05:11.255795  139391 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0819 12:05:11.255802  139391 command_runner.go:130] > #
	I0819 12:05:11.255809  139391 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0819 12:05:11.255817  139391 command_runner.go:130] > # feature.
	I0819 12:05:11.255826  139391 command_runner.go:130] > #
	I0819 12:05:11.255835  139391 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0819 12:05:11.255849  139391 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0819 12:05:11.255862  139391 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0819 12:05:11.255877  139391 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0819 12:05:11.255889  139391 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0819 12:05:11.255896  139391 command_runner.go:130] > #
	I0819 12:05:11.255902  139391 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0819 12:05:11.255914  139391 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0819 12:05:11.255923  139391 command_runner.go:130] > #
	I0819 12:05:11.255934  139391 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0819 12:05:11.255946  139391 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0819 12:05:11.255954  139391 command_runner.go:130] > #
	I0819 12:05:11.255967  139391 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0819 12:05:11.255979  139391 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0819 12:05:11.255988  139391 command_runner.go:130] > # limitation.
	I0819 12:05:11.255996  139391 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0819 12:05:11.256005  139391 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0819 12:05:11.256014  139391 command_runner.go:130] > runtime_type = "oci"
	I0819 12:05:11.256024  139391 command_runner.go:130] > runtime_root = "/run/runc"
	I0819 12:05:11.256035  139391 command_runner.go:130] > runtime_config_path = ""
	I0819 12:05:11.256048  139391 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0819 12:05:11.256058  139391 command_runner.go:130] > monitor_cgroup = "pod"
	I0819 12:05:11.256068  139391 command_runner.go:130] > monitor_exec_cgroup = ""
	I0819 12:05:11.256077  139391 command_runner.go:130] > monitor_env = [
	I0819 12:05:11.256085  139391 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 12:05:11.256093  139391 command_runner.go:130] > ]
	I0819 12:05:11.256101  139391 command_runner.go:130] > privileged_without_host_devices = false
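	For reference, a minimal sketch of a second runtime handler following the format documented above; the handler name "crun", its paths, and the decision to allow only the seccomp notifier annotation are assumptions for illustration, not part of this cluster's configuration:
	
		[crio.runtime.runtimes.crun]
		# Absolute path to the runtime executable; if omitted, "crun" would be looked up in $PATH.
		runtime_path = "/usr/bin/crun"
		runtime_type = "oci"
		runtime_root = "/run/crun"
		monitor_path = "/usr/libexec/crio/conmon"
		monitor_cgroup = "pod"
		monitor_env = [
			"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
		]
		# Permit the seccomp notifier annotation (see the usage notes above) for pods
		# scheduled onto this handler only.
		allowed_annotations = [
			"io.kubernetes.cri-o.seccompNotifierAction",
		]
	
	Kubernetes would typically select such a handler through a RuntimeClass whose handler field matches the table name.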
	I0819 12:05:11.256114  139391 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0819 12:05:11.256126  139391 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0819 12:05:11.256139  139391 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0819 12:05:11.256154  139391 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0819 12:05:11.256169  139391 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0819 12:05:11.256179  139391 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0819 12:05:11.256193  139391 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0819 12:05:11.256210  139391 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0819 12:05:11.256221  139391 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0819 12:05:11.256232  139391 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0819 12:05:11.256238  139391 command_runner.go:130] > # Example:
	I0819 12:05:11.256246  139391 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0819 12:05:11.256254  139391 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0819 12:05:11.256260  139391 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0819 12:05:11.256268  139391 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0819 12:05:11.256272  139391 command_runner.go:130] > # cpuset = 0
	I0819 12:05:11.256278  139391 command_runner.go:130] > # cpushares = "0-1"
	I0819 12:05:11.256283  139391 command_runner.go:130] > # Where:
	I0819 12:05:11.256291  139391 command_runner.go:130] > # The workload name is workload-type.
	I0819 12:05:11.256302  139391 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0819 12:05:11.256311  139391 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0819 12:05:11.256320  139391 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0819 12:05:11.256331  139391 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0819 12:05:11.256340  139391 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
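	To make the commented example above concrete, a minimal sketch of one workload entry (the workloads table is experimental, as noted above); the name "throttled" and the annotation strings are assumptions, and the resource values follow the conventional typing (cpuset as a CPU range string, cpushares as an integer weight) rather than the quoting shown in the template above:
	
		[crio.runtime.workloads.throttled]
		# Pods opt in by carrying this annotation (key only, the value is ignored).
		activation_annotation = "io.crio/throttled"
		# Per-container overrides use annotations of the form
		# "io.crio.throttled.<resource>/<containerName>".
		annotation_prefix = "io.crio.throttled"
		[crio.runtime.workloads.throttled.resources]
		cpuset = "0-1"
		cpushares = 512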
	I0819 12:05:11.256347  139391 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0819 12:05:11.256354  139391 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0819 12:05:11.256358  139391 command_runner.go:130] > # Default value is set to true
	I0819 12:05:11.256365  139391 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0819 12:05:11.256374  139391 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0819 12:05:11.256382  139391 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0819 12:05:11.256389  139391 command_runner.go:130] > # Default value is set to 'false'
	I0819 12:05:11.256400  139391 command_runner.go:130] > # disable_hostport_mapping = false
	I0819 12:05:11.256413  139391 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0819 12:05:11.256421  139391 command_runner.go:130] > #
	I0819 12:05:11.256434  139391 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0819 12:05:11.256442  139391 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0819 12:05:11.256455  139391 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0819 12:05:11.256469  139391 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0819 12:05:11.256481  139391 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0819 12:05:11.256490  139391 command_runner.go:130] > [crio.image]
	I0819 12:05:11.256500  139391 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0819 12:05:11.256510  139391 command_runner.go:130] > # default_transport = "docker://"
	I0819 12:05:11.256522  139391 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0819 12:05:11.256533  139391 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0819 12:05:11.256541  139391 command_runner.go:130] > # global_auth_file = ""
	I0819 12:05:11.256549  139391 command_runner.go:130] > # The image used to instantiate infra containers.
	I0819 12:05:11.256561  139391 command_runner.go:130] > # This option supports live configuration reload.
	I0819 12:05:11.256573  139391 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0819 12:05:11.256587  139391 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0819 12:05:11.256599  139391 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0819 12:05:11.256610  139391 command_runner.go:130] > # This option supports live configuration reload.
	I0819 12:05:11.256623  139391 command_runner.go:130] > # pause_image_auth_file = ""
	I0819 12:05:11.256634  139391 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0819 12:05:11.256648  139391 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0819 12:05:11.256660  139391 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0819 12:05:11.256671  139391 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0819 12:05:11.256680  139391 command_runner.go:130] > # pause_command = "/pause"
	I0819 12:05:11.256690  139391 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0819 12:05:11.256701  139391 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0819 12:05:11.256710  139391 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0819 12:05:11.256721  139391 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0819 12:05:11.256734  139391 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0819 12:05:11.256747  139391 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0819 12:05:11.256756  139391 command_runner.go:130] > # pinned_images = [
	I0819 12:05:11.256764  139391 command_runner.go:130] > # ]
	I0819 12:05:11.256777  139391 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0819 12:05:11.256792  139391 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0819 12:05:11.256803  139391 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0819 12:05:11.256818  139391 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0819 12:05:11.256830  139391 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0819 12:05:11.256837  139391 command_runner.go:130] > # signature_policy = ""
	I0819 12:05:11.256849  139391 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0819 12:05:11.256862  139391 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0819 12:05:11.256875  139391 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0819 12:05:11.256888  139391 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0819 12:05:11.256897  139391 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0819 12:05:11.256905  139391 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0819 12:05:11.256919  139391 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0819 12:05:11.256932  139391 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0819 12:05:11.256942  139391 command_runner.go:130] > # changing them here.
	I0819 12:05:11.256951  139391 command_runner.go:130] > # insecure_registries = [
	I0819 12:05:11.256960  139391 command_runner.go:130] > # ]
	I0819 12:05:11.256972  139391 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0819 12:05:11.256983  139391 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0819 12:05:11.256993  139391 command_runner.go:130] > # image_volumes = "mkdir"
	I0819 12:05:11.257002  139391 command_runner.go:130] > # Temporary directory to use for storing big files
	I0819 12:05:11.257010  139391 command_runner.go:130] > # big_files_temporary_dir = ""
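	Pulling the [crio.image] options together, a minimal sketch of the settings a cluster might uncomment; the local registry hostname is an assumption, and in most cases registries are better configured in /etc/containers/registries.conf as noted above:
	
		[crio.image]
		pause_image = "registry.k8s.io/pause:3.10"
		# Exact-match pattern keeping the pause image out of kubelet garbage collection.
		pinned_images = [
			"registry.k8s.io/pause:3.10",
		]
		# Skip TLS verification only for a throwaway local registry (assumed hostname).
		insecure_registries = [
			"localhost:5000",
		]
		# Create a directory for each declared image volume instead of bind-mounting it.
		image_volumes = "mkdir"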
	I0819 12:05:11.257029  139391 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0819 12:05:11.257039  139391 command_runner.go:130] > # CNI plugins.
	I0819 12:05:11.257045  139391 command_runner.go:130] > [crio.network]
	I0819 12:05:11.257059  139391 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0819 12:05:11.257070  139391 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0819 12:05:11.257080  139391 command_runner.go:130] > # cni_default_network = ""
	I0819 12:05:11.257092  139391 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0819 12:05:11.257101  139391 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0819 12:05:11.257110  139391 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0819 12:05:11.257117  139391 command_runner.go:130] > # plugin_dirs = [
	I0819 12:05:11.257124  139391 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0819 12:05:11.257132  139391 command_runner.go:130] > # ]
	I0819 12:05:11.257142  139391 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0819 12:05:11.257152  139391 command_runner.go:130] > [crio.metrics]
	I0819 12:05:11.257162  139391 command_runner.go:130] > # Globally enable or disable metrics support.
	I0819 12:05:11.257172  139391 command_runner.go:130] > enable_metrics = true
	I0819 12:05:11.257182  139391 command_runner.go:130] > # Specify enabled metrics collectors.
	I0819 12:05:11.257194  139391 command_runner.go:130] > # Per default all metrics are enabled.
	I0819 12:05:11.257204  139391 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0819 12:05:11.257214  139391 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0819 12:05:11.257226  139391 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0819 12:05:11.257236  139391 command_runner.go:130] > # metrics_collectors = [
	I0819 12:05:11.257243  139391 command_runner.go:130] > # 	"operations",
	I0819 12:05:11.257253  139391 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0819 12:05:11.257264  139391 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0819 12:05:11.257273  139391 command_runner.go:130] > # 	"operations_errors",
	I0819 12:05:11.257283  139391 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0819 12:05:11.257293  139391 command_runner.go:130] > # 	"image_pulls_by_name",
	I0819 12:05:11.257302  139391 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0819 12:05:11.257309  139391 command_runner.go:130] > # 	"image_pulls_failures",
	I0819 12:05:11.257315  139391 command_runner.go:130] > # 	"image_pulls_successes",
	I0819 12:05:11.257324  139391 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0819 12:05:11.257334  139391 command_runner.go:130] > # 	"image_layer_reuse",
	I0819 12:05:11.257342  139391 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0819 12:05:11.257352  139391 command_runner.go:130] > # 	"containers_oom_total",
	I0819 12:05:11.257362  139391 command_runner.go:130] > # 	"containers_oom",
	I0819 12:05:11.257371  139391 command_runner.go:130] > # 	"processes_defunct",
	I0819 12:05:11.257380  139391 command_runner.go:130] > # 	"operations_total",
	I0819 12:05:11.257390  139391 command_runner.go:130] > # 	"operations_latency_seconds",
	I0819 12:05:11.257400  139391 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0819 12:05:11.257407  139391 command_runner.go:130] > # 	"operations_errors_total",
	I0819 12:05:11.257412  139391 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0819 12:05:11.257422  139391 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0819 12:05:11.257433  139391 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0819 12:05:11.257440  139391 command_runner.go:130] > # 	"image_pulls_success_total",
	I0819 12:05:11.257455  139391 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0819 12:05:11.257465  139391 command_runner.go:130] > # 	"containers_oom_count_total",
	I0819 12:05:11.257476  139391 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0819 12:05:11.257486  139391 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0819 12:05:11.257492  139391 command_runner.go:130] > # ]
	I0819 12:05:11.257501  139391 command_runner.go:130] > # The port on which the metrics server will listen.
	I0819 12:05:11.257507  139391 command_runner.go:130] > # metrics_port = 9090
	I0819 12:05:11.257515  139391 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0819 12:05:11.257527  139391 command_runner.go:130] > # metrics_socket = ""
	I0819 12:05:11.257539  139391 command_runner.go:130] > # The certificate for the secure metrics server.
	I0819 12:05:11.257552  139391 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0819 12:05:11.257566  139391 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0819 12:05:11.257576  139391 command_runner.go:130] > # certificate on any modification event.
	I0819 12:05:11.257585  139391 command_runner.go:130] > # metrics_cert = ""
	I0819 12:05:11.257591  139391 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0819 12:05:11.257598  139391 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0819 12:05:11.257607  139391 command_runner.go:130] > # metrics_key = ""
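	As a hedged sketch of the metrics options above (this run only sets enable_metrics = true and leaves the rest at their defaults), restricting collection to a few collectors taken from the documented list:
	
		[crio.metrics]
		enable_metrics = true
		metrics_port = 9090
		# Collector names may equivalently carry the "crio_" or "container_runtime_" prefix.
		metrics_collectors = [
			"operations",
			"image_pulls_failure_total",
			"containers_oom_count_total",
		]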
	I0819 12:05:11.257617  139391 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0819 12:05:11.257627  139391 command_runner.go:130] > [crio.tracing]
	I0819 12:05:11.257636  139391 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0819 12:05:11.257645  139391 command_runner.go:130] > # enable_tracing = false
	I0819 12:05:11.257654  139391 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0819 12:05:11.257664  139391 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0819 12:05:11.257676  139391 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0819 12:05:11.257684  139391 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
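	A minimal sketch of turning tracing on with the options above; the collector address is an assumption and the rate only illustrates the per-million semantics:
	
		[crio.tracing]
		enable_tracing = true
		# OTLP/gRPC collector address (assumed to be running locally).
		tracing_endpoint = "127.0.0.1:4317"
		# 10000 per million is roughly 1% of spans; 1000000 would sample every span.
		tracing_sampling_rate_per_million = 10000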
	I0819 12:05:11.257691  139391 command_runner.go:130] > # CRI-O NRI configuration.
	I0819 12:05:11.257699  139391 command_runner.go:130] > [crio.nri]
	I0819 12:05:11.257707  139391 command_runner.go:130] > # Globally enable or disable NRI.
	I0819 12:05:11.257717  139391 command_runner.go:130] > # enable_nri = false
	I0819 12:05:11.257727  139391 command_runner.go:130] > # NRI socket to listen on.
	I0819 12:05:11.257739  139391 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0819 12:05:11.257748  139391 command_runner.go:130] > # NRI plugin directory to use.
	I0819 12:05:11.257758  139391 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0819 12:05:11.257768  139391 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0819 12:05:11.257777  139391 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0819 12:05:11.257793  139391 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0819 12:05:11.257804  139391 command_runner.go:130] > # nri_disable_connections = false
	I0819 12:05:11.257812  139391 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0819 12:05:11.257823  139391 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0819 12:05:11.257834  139391 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0819 12:05:11.257845  139391 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0819 12:05:11.257857  139391 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0819 12:05:11.257866  139391 command_runner.go:130] > [crio.stats]
	I0819 12:05:11.257879  139391 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0819 12:05:11.257891  139391 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0819 12:05:11.257902  139391 command_runner.go:130] > # stats_collection_period = 0
	I0819 12:05:11.258051  139391 cni.go:84] Creating CNI manager for ""
	I0819 12:05:11.258063  139391 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0819 12:05:11.258073  139391 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 12:05:11.258104  139391 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.88 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-320821 NodeName:multinode-320821 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 12:05:11.258260  139391 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.88
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-320821"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.88
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.88"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 12:05:11.258338  139391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 12:05:11.268868  139391 command_runner.go:130] > kubeadm
	I0819 12:05:11.268895  139391 command_runner.go:130] > kubectl
	I0819 12:05:11.268901  139391 command_runner.go:130] > kubelet
	I0819 12:05:11.268927  139391 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 12:05:11.268985  139391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 12:05:11.278864  139391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0819 12:05:11.296126  139391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 12:05:11.312836  139391 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0819 12:05:11.330227  139391 ssh_runner.go:195] Run: grep 192.168.39.88	control-plane.minikube.internal$ /etc/hosts
	I0819 12:05:11.334469  139391 command_runner.go:130] > 192.168.39.88	control-plane.minikube.internal
	I0819 12:05:11.334582  139391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:05:11.474281  139391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:05:11.489147  139391 certs.go:68] Setting up /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821 for IP: 192.168.39.88
	I0819 12:05:11.489172  139391 certs.go:194] generating shared ca certs ...
	I0819 12:05:11.489197  139391 certs.go:226] acquiring lock for ca certs: {Name:mkea0a5571fe2f86ea35780a8a0585cdcb5f186c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:05:11.489375  139391 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key
	I0819 12:05:11.489428  139391 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key
	I0819 12:05:11.489442  139391 certs.go:256] generating profile certs ...
	I0819 12:05:11.489630  139391 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821/client.key
	I0819 12:05:11.489716  139391 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821/apiserver.key.1a1a7689
	I0819 12:05:11.489759  139391 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821/proxy-client.key
	I0819 12:05:11.489774  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 12:05:11.489793  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 12:05:11.489810  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 12:05:11.489828  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 12:05:11.489844  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 12:05:11.489862  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 12:05:11.489884  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 12:05:11.489901  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 12:05:11.489974  139391 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem (1338 bytes)
	W0819 12:05:11.490012  139391 certs.go:480] ignoring /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632_empty.pem, impossibly tiny 0 bytes
	I0819 12:05:11.490022  139391 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 12:05:11.490055  139391 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem (1082 bytes)
	I0819 12:05:11.490087  139391 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem (1123 bytes)
	I0819 12:05:11.490115  139391 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem (1679 bytes)
	I0819 12:05:11.490166  139391 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 12:05:11.490199  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> /usr/share/ca-certificates/1066322.pem
	I0819 12:05:11.490219  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:05:11.490237  139391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem -> /usr/share/ca-certificates/106632.pem
	I0819 12:05:11.491053  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 12:05:11.515953  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 12:05:11.540314  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 12:05:11.564381  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 12:05:11.589575  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 12:05:11.614661  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 12:05:11.639121  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 12:05:11.663016  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/multinode-320821/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 12:05:11.687920  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /usr/share/ca-certificates/1066322.pem (1708 bytes)
	I0819 12:05:11.712519  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 12:05:11.736686  139391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem --> /usr/share/ca-certificates/106632.pem (1338 bytes)
	I0819 12:05:11.760798  139391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 12:05:11.777874  139391 ssh_runner.go:195] Run: openssl version
	I0819 12:05:11.783483  139391 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0819 12:05:11.783569  139391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1066322.pem && ln -fs /usr/share/ca-certificates/1066322.pem /etc/ssl/certs/1066322.pem"
	I0819 12:05:11.794459  139391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1066322.pem
	I0819 12:05:11.798941  139391 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 19 11:26 /usr/share/ca-certificates/1066322.pem
	I0819 12:05:11.798976  139391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 11:26 /usr/share/ca-certificates/1066322.pem
	I0819 12:05:11.799020  139391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1066322.pem
	I0819 12:05:11.804654  139391 command_runner.go:130] > 3ec20f2e
	I0819 12:05:11.804763  139391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1066322.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 12:05:11.814213  139391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 12:05:11.825036  139391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:05:11.829930  139391 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 19 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:05:11.829976  139391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:05:11.830028  139391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:05:11.836189  139391 command_runner.go:130] > b5213941
	I0819 12:05:11.836280  139391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 12:05:11.845975  139391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106632.pem && ln -fs /usr/share/ca-certificates/106632.pem /etc/ssl/certs/106632.pem"
	I0819 12:05:11.857080  139391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106632.pem
	I0819 12:05:11.863635  139391 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 19 11:26 /usr/share/ca-certificates/106632.pem
	I0819 12:05:11.863682  139391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 11:26 /usr/share/ca-certificates/106632.pem
	I0819 12:05:11.863758  139391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106632.pem
	I0819 12:05:11.870616  139391 command_runner.go:130] > 51391683
	I0819 12:05:11.870710  139391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106632.pem /etc/ssl/certs/51391683.0"
	I0819 12:05:11.906188  139391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:05:11.910981  139391 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:05:11.911015  139391 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0819 12:05:11.911024  139391 command_runner.go:130] > Device: 253,1	Inode: 1056278     Links: 1
	I0819 12:05:11.911034  139391 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 12:05:11.911050  139391 command_runner.go:130] > Access: 2024-08-19 11:58:21.223937169 +0000
	I0819 12:05:11.911058  139391 command_runner.go:130] > Modify: 2024-08-19 11:58:21.223937169 +0000
	I0819 12:05:11.911067  139391 command_runner.go:130] > Change: 2024-08-19 11:58:21.223937169 +0000
	I0819 12:05:11.911074  139391 command_runner.go:130] >  Birth: 2024-08-19 11:58:21.223937169 +0000
	I0819 12:05:11.911179  139391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 12:05:11.920014  139391 command_runner.go:130] > Certificate will not expire
	I0819 12:05:11.920163  139391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 12:05:11.927193  139391 command_runner.go:130] > Certificate will not expire
	I0819 12:05:11.927327  139391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 12:05:11.945550  139391 command_runner.go:130] > Certificate will not expire
	I0819 12:05:11.945662  139391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 12:05:11.977275  139391 command_runner.go:130] > Certificate will not expire
	I0819 12:05:11.977372  139391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 12:05:11.986351  139391 command_runner.go:130] > Certificate will not expire
	I0819 12:05:11.986480  139391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 12:05:11.997065  139391 command_runner.go:130] > Certificate will not expire
	I0819 12:05:11.997140  139391 kubeadm.go:392] StartCluster: {Name:multinode-320821 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-320821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.19 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:05:11.997261  139391 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 12:05:11.997318  139391 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 12:05:12.089865  139391 command_runner.go:130] > 9a56e4a12c9aa8da01773547f9e365d4993ecac11cf16d4120c72ec977233f9d
	I0819 12:05:12.089908  139391 command_runner.go:130] > 8aecdda5f9f767c862cb4ab67adc13b179d56fcf8fb1c01189a32546fb2fdcff
	I0819 12:05:12.089918  139391 command_runner.go:130] > 821073acd978a713b44069c4ae2ff6c599abf7108c707291b6495efed4a4318f
	I0819 12:05:12.089929  139391 command_runner.go:130] > 16f3a27e9da94d6d5999070bcb91d70d5ea1fec1ecc6eaad5b7fff4fce98647f
	I0819 12:05:12.089935  139391 command_runner.go:130] > 457a1c6babc8a40635e0c66b4e681aae9f346f21e70e037eab570467dd84c619
	I0819 12:05:12.089940  139391 command_runner.go:130] > 3c5f6b536a88ec232e63931b399ca3a32d410e628565d56cab9cfba5d5335174
	I0819 12:05:12.089946  139391 command_runner.go:130] > e24d6ca23b038638d4aa30410ff1b35fc0bca2d3cdbdf44468e2de01b598f959
	I0819 12:05:12.089953  139391 command_runner.go:130] > 91cf874d8d0dc8f26b5bdb542f341a75ed323b886d15b9338c14855fdb890994
	I0819 12:05:12.089982  139391 cri.go:89] found id: "9a56e4a12c9aa8da01773547f9e365d4993ecac11cf16d4120c72ec977233f9d"
	I0819 12:05:12.089991  139391 cri.go:89] found id: "8aecdda5f9f767c862cb4ab67adc13b179d56fcf8fb1c01189a32546fb2fdcff"
	I0819 12:05:12.089994  139391 cri.go:89] found id: "821073acd978a713b44069c4ae2ff6c599abf7108c707291b6495efed4a4318f"
	I0819 12:05:12.089997  139391 cri.go:89] found id: "16f3a27e9da94d6d5999070bcb91d70d5ea1fec1ecc6eaad5b7fff4fce98647f"
	I0819 12:05:12.090000  139391 cri.go:89] found id: "457a1c6babc8a40635e0c66b4e681aae9f346f21e70e037eab570467dd84c619"
	I0819 12:05:12.090003  139391 cri.go:89] found id: "3c5f6b536a88ec232e63931b399ca3a32d410e628565d56cab9cfba5d5335174"
	I0819 12:05:12.090006  139391 cri.go:89] found id: "e24d6ca23b038638d4aa30410ff1b35fc0bca2d3cdbdf44468e2de01b598f959"
	I0819 12:05:12.090009  139391 cri.go:89] found id: "91cf874d8d0dc8f26b5bdb542f341a75ed323b886d15b9338c14855fdb890994"
	I0819 12:05:12.090011  139391 cri.go:89] found id: ""
	I0819 12:05:12.090056  139391 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 19 12:09:18 multinode-320821 crio[2750]: time="2024-08-19 12:09:18.005287936Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069358005266643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ce6f163-7c19-47ef-a52c-c3818524dfa9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:09:18 multinode-320821 crio[2750]: time="2024-08-19 12:09:18.005792121Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=75343bc9-8d1b-47c3-9719-928c291f14a7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:09:18 multinode-320821 crio[2750]: time="2024-08-19 12:09:18.005846600Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=75343bc9-8d1b-47c3-9719-928c291f14a7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:09:18 multinode-320821 crio[2750]: time="2024-08-19 12:09:18.007019963Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c65bc9be79f6bb28386aad25d3c518f7993dd4e7fc0ded651a579bd226a21e3,PodSandboxId:2dee2b5f29bd311529135296b68308146071df02ec2cbe69458ed21e02ae3258,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724069151440580617,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kjbkv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05cf6615-10e8-43a0-a730-d56145296a11,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b8e142128325bd373fefb2716f98caf1c3bbf0a0954bc8530631e3370474a7,PodSandboxId:5e7bfdf817d635e8d64547426a53b5ea6f62c74856db067a7f5b1f5844004a2c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724069117843916603,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2k549,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fe239f5-81f4-4acf-899d-de128f526516,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d46285b33561282b00d12e30d0ac2eb7f61e9c16bd377657dbe56721cf8d28,PodSandboxId:13db2fe11ae92e7539de4a4ef99499eb398e3f5492aae8fce83ca293264671bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724069117905314601,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qfdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fbfb07-192b-46ad-9bd4-8de7672df209,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09611480af58a72299f384042adf2489546cb1d7b2d26eb694022bc29650ad95,PodSandboxId:0ecf0f0a485198582e3e57adc87eace03490d0fd904fbd054f4966e4567abb76,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724069117797909095,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kjdfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaaa7294-63fb-407c-9dbe-a8282d6edcda,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d3a7ba2d600d4ae49927eb69615fb3656fdd4b7b8646b8ed60706e2846c05e,PodSandboxId:e4d12cf76f6cf80e586bdc567f8aaa5c3a03c20ed0675398a8b5c4ab5f7bcb2d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724069117724374251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ff9c00-d89b-4078-a7f0-855458417ee5,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e16be7aeb3a437e5481a7e52a1ece6d8752be78153f8fd6b771bdf436021709f,PodSandboxId:6c177bf2d7c23ae97634bb26dd8c262706613a305321e8939b6a026f3da9b70b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724069114616727025,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0971c099a1aff8cbb98d5ae870a6c648,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1459312e96b1bf6a7864167625e6450505699b7c7f0ca669e14fac2299e3ac16,PodSandboxId:46c1fa2472ada3193201d93b313fd9a798e176a3b649b066feb31979be14d261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724069114618355937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ac53c658efe4ba8ffd0e2f236df130,},Annotations:map[string]string{io.kube
rnetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa19842caefbbec7549b2783ff78a002c7ddbbfb24b013cd0851f96812bfc4be,PodSandboxId:6a8470ecef3c4bbe0afbaa09ad097be123c9580ffa611ae3a30e04156892df9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724069112154105546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8770efc46e477d513aa9f98adb24a89b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99df12b846dbbd7da46c0177c9008c82b67ee6388e81bf59dc517c74a90a08c8,PodSandboxId:d910ec1f8211fbe2e23253112e6974521933f0db65d3e3e8470a791a1b7f30f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724069112151951349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b82c5559899a31d7270882afc7793,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e500f6d0181c7df702cc713c901a9426693e3a6f8fbbc8bf54adaaa27fe2a095,PodSandboxId:46c1fa2472ada3193201d93b313fd9a798e176a3b649b066feb31979be14d261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724069112118662945,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ac53c658efe4ba8ffd0e2f236df130,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea00494de0eb0945f4379b0a739e42c6da67d90e4b045d91b7ba42e5847b3a3,PodSandboxId:6c177bf2d7c23ae97634bb26dd8c262706613a305321e8939b6a026f3da9b70b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724069112101558818,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0971c099a1aff8cbb98d5ae870a6c648,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a4487882fdf97510aa26c8a63bf0a9cff43cd1b753731dcd86ef3ee10d9fda,PodSandboxId:71a902acf5f3a3225a11e0745ac28efd74020edc4921a53c46aa2a8fa686f63a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724068784030468860,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kjbkv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05cf6615-10e8-43a0-a730-d56145296a11,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a56e4a12c9aa8da01773547f9e365d4993ecac11cf16d4120c72ec977233f9d,PodSandboxId:7c5eedd9bea6b1c9f8de0fb57cb2d72b913b1d5335fb9fd5f46f7f1acfd1f9bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724068731196453193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qfdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fbfb07-192b-46ad-9bd4-8de7672df209,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aecdda5f9f767c862cb4ab67adc13b179d56fcf8fb1c01189a32546fb2fdcff,PodSandboxId:8eed3e1a479d2a6233444c1757b87c2826c6efc916138f94b2baf8374e1a65a2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724068731176049339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 28ff9c00-d89b-4078-a7f0-855458417ee5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:821073acd978a713b44069c4ae2ff6c599abf7108c707291b6495efed4a4318f,PodSandboxId:7d94180b7a43f7458998f458273e52f4df8d577f672f8572d96f9517aa556e78,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724068719429242208,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2k549,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 1fe239f5-81f4-4acf-899d-de128f526516,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f3a27e9da94d6d5999070bcb91d70d5ea1fec1ecc6eaad5b7fff4fce98647f,PodSandboxId:6c2448e0084ce4bd5016ad12939e878310178d35c704f948d6bcf0fa874c23cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724068716927547819,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kjdfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: aaaa7294-63fb-407c-9dbe-a8282d6edcda,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c5f6b536a88ec232e63931b399ca3a32d410e628565d56cab9cfba5d5335174,PodSandboxId:d3dfabfedbc5351307fd792dc8f6be5f3a1e332fb14ac03b247fd5a3ead1a22e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724068705290622607,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b82c5559899a31d7270882afc7793,},
Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91cf874d8d0dc8f26b5bdb542f341a75ed323b886d15b9338c14855fdb890994,PodSandboxId:732035bad80065dc50e7eb2613259ffa26657cc179f49da4ada12b4d46f16848,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724068705236863470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8770efc46e477d513aa9f98adb24a89b,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=75343bc9-8d1b-47c3-9719-928c291f14a7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:09:18 multinode-320821 crio[2750]: time="2024-08-19 12:09:18.055989989Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d9baf722-d129-4871-b412-3687c657cdaf name=/runtime.v1.RuntimeService/Version
	Aug 19 12:09:18 multinode-320821 crio[2750]: time="2024-08-19 12:09:18.056063059Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d9baf722-d129-4871-b412-3687c657cdaf name=/runtime.v1.RuntimeService/Version
	Aug 19 12:09:18 multinode-320821 crio[2750]: time="2024-08-19 12:09:18.057404345Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c28bd06e-7989-4f91-81d4-68f0eded4cba name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:09:18 multinode-320821 crio[2750]: time="2024-08-19 12:09:18.057897435Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069358057869791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c28bd06e-7989-4f91-81d4-68f0eded4cba name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:09:18 multinode-320821 crio[2750]: time="2024-08-19 12:09:18.058415607Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6f0b1d90-9b4c-4a36-8ecb-3d233f60daa5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:09:18 multinode-320821 crio[2750]: time="2024-08-19 12:09:18.058474748Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6f0b1d90-9b4c-4a36-8ecb-3d233f60daa5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:09:18 multinode-320821 crio[2750]: time="2024-08-19 12:09:18.058862839Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c65bc9be79f6bb28386aad25d3c518f7993dd4e7fc0ded651a579bd226a21e3,PodSandboxId:2dee2b5f29bd311529135296b68308146071df02ec2cbe69458ed21e02ae3258,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724069151440580617,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kjbkv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05cf6615-10e8-43a0-a730-d56145296a11,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b8e142128325bd373fefb2716f98caf1c3bbf0a0954bc8530631e3370474a7,PodSandboxId:5e7bfdf817d635e8d64547426a53b5ea6f62c74856db067a7f5b1f5844004a2c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724069117843916603,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2k549,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fe239f5-81f4-4acf-899d-de128f526516,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d46285b33561282b00d12e30d0ac2eb7f61e9c16bd377657dbe56721cf8d28,PodSandboxId:13db2fe11ae92e7539de4a4ef99499eb398e3f5492aae8fce83ca293264671bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724069117905314601,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qfdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fbfb07-192b-46ad-9bd4-8de7672df209,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09611480af58a72299f384042adf2489546cb1d7b2d26eb694022bc29650ad95,PodSandboxId:0ecf0f0a485198582e3e57adc87eace03490d0fd904fbd054f4966e4567abb76,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724069117797909095,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kjdfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaaa7294-63fb-407c-9dbe-a8282d6edcda,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d3a7ba2d600d4ae49927eb69615fb3656fdd4b7b8646b8ed60706e2846c05e,PodSandboxId:e4d12cf76f6cf80e586bdc567f8aaa5c3a03c20ed0675398a8b5c4ab5f7bcb2d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724069117724374251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ff9c00-d89b-4078-a7f0-855458417ee5,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e16be7aeb3a437e5481a7e52a1ece6d8752be78153f8fd6b771bdf436021709f,PodSandboxId:6c177bf2d7c23ae97634bb26dd8c262706613a305321e8939b6a026f3da9b70b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724069114616727025,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0971c099a1aff8cbb98d5ae870a6c648,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1459312e96b1bf6a7864167625e6450505699b7c7f0ca669e14fac2299e3ac16,PodSandboxId:46c1fa2472ada3193201d93b313fd9a798e176a3b649b066feb31979be14d261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724069114618355937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ac53c658efe4ba8ffd0e2f236df130,},Annotations:map[string]string{io.kube
rnetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa19842caefbbec7549b2783ff78a002c7ddbbfb24b013cd0851f96812bfc4be,PodSandboxId:6a8470ecef3c4bbe0afbaa09ad097be123c9580ffa611ae3a30e04156892df9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724069112154105546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8770efc46e477d513aa9f98adb24a89b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99df12b846dbbd7da46c0177c9008c82b67ee6388e81bf59dc517c74a90a08c8,PodSandboxId:d910ec1f8211fbe2e23253112e6974521933f0db65d3e3e8470a791a1b7f30f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724069112151951349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b82c5559899a31d7270882afc7793,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e500f6d0181c7df702cc713c901a9426693e3a6f8fbbc8bf54adaaa27fe2a095,PodSandboxId:46c1fa2472ada3193201d93b313fd9a798e176a3b649b066feb31979be14d261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724069112118662945,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ac53c658efe4ba8ffd0e2f236df130,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea00494de0eb0945f4379b0a739e42c6da67d90e4b045d91b7ba42e5847b3a3,PodSandboxId:6c177bf2d7c23ae97634bb26dd8c262706613a305321e8939b6a026f3da9b70b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724069112101558818,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0971c099a1aff8cbb98d5ae870a6c648,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a4487882fdf97510aa26c8a63bf0a9cff43cd1b753731dcd86ef3ee10d9fda,PodSandboxId:71a902acf5f3a3225a11e0745ac28efd74020edc4921a53c46aa2a8fa686f63a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724068784030468860,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kjbkv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05cf6615-10e8-43a0-a730-d56145296a11,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a56e4a12c9aa8da01773547f9e365d4993ecac11cf16d4120c72ec977233f9d,PodSandboxId:7c5eedd9bea6b1c9f8de0fb57cb2d72b913b1d5335fb9fd5f46f7f1acfd1f9bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724068731196453193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qfdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fbfb07-192b-46ad-9bd4-8de7672df209,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aecdda5f9f767c862cb4ab67adc13b179d56fcf8fb1c01189a32546fb2fdcff,PodSandboxId:8eed3e1a479d2a6233444c1757b87c2826c6efc916138f94b2baf8374e1a65a2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724068731176049339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 28ff9c00-d89b-4078-a7f0-855458417ee5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:821073acd978a713b44069c4ae2ff6c599abf7108c707291b6495efed4a4318f,PodSandboxId:7d94180b7a43f7458998f458273e52f4df8d577f672f8572d96f9517aa556e78,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724068719429242208,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2k549,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 1fe239f5-81f4-4acf-899d-de128f526516,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f3a27e9da94d6d5999070bcb91d70d5ea1fec1ecc6eaad5b7fff4fce98647f,PodSandboxId:6c2448e0084ce4bd5016ad12939e878310178d35c704f948d6bcf0fa874c23cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724068716927547819,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kjdfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: aaaa7294-63fb-407c-9dbe-a8282d6edcda,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c5f6b536a88ec232e63931b399ca3a32d410e628565d56cab9cfba5d5335174,PodSandboxId:d3dfabfedbc5351307fd792dc8f6be5f3a1e332fb14ac03b247fd5a3ead1a22e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724068705290622607,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b82c5559899a31d7270882afc7793,},
Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91cf874d8d0dc8f26b5bdb542f341a75ed323b886d15b9338c14855fdb890994,PodSandboxId:732035bad80065dc50e7eb2613259ffa26657cc179f49da4ada12b4d46f16848,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724068705236863470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8770efc46e477d513aa9f98adb24a89b,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6f0b1d90-9b4c-4a36-8ecb-3d233f60daa5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:09:18 multinode-320821 crio[2750]: time="2024-08-19 12:09:18.098846573Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd3d32fe-ab60-499b-bc9c-b9a6f2cd2224 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:09:18 multinode-320821 crio[2750]: time="2024-08-19 12:09:18.098920967Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd3d32fe-ab60-499b-bc9c-b9a6f2cd2224 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:09:18 multinode-320821 crio[2750]: time="2024-08-19 12:09:18.099848414Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dd58babe-a00d-421a-9cc7-9f7ca5d8d407 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:09:18 multinode-320821 crio[2750]: time="2024-08-19 12:09:18.100241397Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069358100222076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd58babe-a00d-421a-9cc7-9f7ca5d8d407 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:09:18 multinode-320821 crio[2750]: time="2024-08-19 12:09:18.100937142Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f08887cc-8f2a-451c-9782-d89968e151b2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:09:18 multinode-320821 crio[2750]: time="2024-08-19 12:09:18.100991300Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f08887cc-8f2a-451c-9782-d89968e151b2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:09:18 multinode-320821 crio[2750]: time="2024-08-19 12:09:18.101324876Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c65bc9be79f6bb28386aad25d3c518f7993dd4e7fc0ded651a579bd226a21e3,PodSandboxId:2dee2b5f29bd311529135296b68308146071df02ec2cbe69458ed21e02ae3258,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724069151440580617,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kjbkv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05cf6615-10e8-43a0-a730-d56145296a11,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b8e142128325bd373fefb2716f98caf1c3bbf0a0954bc8530631e3370474a7,PodSandboxId:5e7bfdf817d635e8d64547426a53b5ea6f62c74856db067a7f5b1f5844004a2c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724069117843916603,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2k549,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fe239f5-81f4-4acf-899d-de128f526516,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d46285b33561282b00d12e30d0ac2eb7f61e9c16bd377657dbe56721cf8d28,PodSandboxId:13db2fe11ae92e7539de4a4ef99499eb398e3f5492aae8fce83ca293264671bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724069117905314601,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qfdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fbfb07-192b-46ad-9bd4-8de7672df209,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09611480af58a72299f384042adf2489546cb1d7b2d26eb694022bc29650ad95,PodSandboxId:0ecf0f0a485198582e3e57adc87eace03490d0fd904fbd054f4966e4567abb76,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724069117797909095,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kjdfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaaa7294-63fb-407c-9dbe-a8282d6edcda,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d3a7ba2d600d4ae49927eb69615fb3656fdd4b7b8646b8ed60706e2846c05e,PodSandboxId:e4d12cf76f6cf80e586bdc567f8aaa5c3a03c20ed0675398a8b5c4ab5f7bcb2d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724069117724374251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ff9c00-d89b-4078-a7f0-855458417ee5,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e16be7aeb3a437e5481a7e52a1ece6d8752be78153f8fd6b771bdf436021709f,PodSandboxId:6c177bf2d7c23ae97634bb26dd8c262706613a305321e8939b6a026f3da9b70b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724069114616727025,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0971c099a1aff8cbb98d5ae870a6c648,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1459312e96b1bf6a7864167625e6450505699b7c7f0ca669e14fac2299e3ac16,PodSandboxId:46c1fa2472ada3193201d93b313fd9a798e176a3b649b066feb31979be14d261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724069114618355937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ac53c658efe4ba8ffd0e2f236df130,},Annotations:map[string]string{io.kube
rnetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa19842caefbbec7549b2783ff78a002c7ddbbfb24b013cd0851f96812bfc4be,PodSandboxId:6a8470ecef3c4bbe0afbaa09ad097be123c9580ffa611ae3a30e04156892df9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724069112154105546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8770efc46e477d513aa9f98adb24a89b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99df12b846dbbd7da46c0177c9008c82b67ee6388e81bf59dc517c74a90a08c8,PodSandboxId:d910ec1f8211fbe2e23253112e6974521933f0db65d3e3e8470a791a1b7f30f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724069112151951349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b82c5559899a31d7270882afc7793,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e500f6d0181c7df702cc713c901a9426693e3a6f8fbbc8bf54adaaa27fe2a095,PodSandboxId:46c1fa2472ada3193201d93b313fd9a798e176a3b649b066feb31979be14d261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724069112118662945,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ac53c658efe4ba8ffd0e2f236df130,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea00494de0eb0945f4379b0a739e42c6da67d90e4b045d91b7ba42e5847b3a3,PodSandboxId:6c177bf2d7c23ae97634bb26dd8c262706613a305321e8939b6a026f3da9b70b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724069112101558818,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0971c099a1aff8cbb98d5ae870a6c648,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a4487882fdf97510aa26c8a63bf0a9cff43cd1b753731dcd86ef3ee10d9fda,PodSandboxId:71a902acf5f3a3225a11e0745ac28efd74020edc4921a53c46aa2a8fa686f63a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724068784030468860,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kjbkv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05cf6615-10e8-43a0-a730-d56145296a11,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a56e4a12c9aa8da01773547f9e365d4993ecac11cf16d4120c72ec977233f9d,PodSandboxId:7c5eedd9bea6b1c9f8de0fb57cb2d72b913b1d5335fb9fd5f46f7f1acfd1f9bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724068731196453193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qfdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fbfb07-192b-46ad-9bd4-8de7672df209,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aecdda5f9f767c862cb4ab67adc13b179d56fcf8fb1c01189a32546fb2fdcff,PodSandboxId:8eed3e1a479d2a6233444c1757b87c2826c6efc916138f94b2baf8374e1a65a2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724068731176049339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 28ff9c00-d89b-4078-a7f0-855458417ee5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:821073acd978a713b44069c4ae2ff6c599abf7108c707291b6495efed4a4318f,PodSandboxId:7d94180b7a43f7458998f458273e52f4df8d577f672f8572d96f9517aa556e78,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724068719429242208,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2k549,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 1fe239f5-81f4-4acf-899d-de128f526516,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f3a27e9da94d6d5999070bcb91d70d5ea1fec1ecc6eaad5b7fff4fce98647f,PodSandboxId:6c2448e0084ce4bd5016ad12939e878310178d35c704f948d6bcf0fa874c23cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724068716927547819,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kjdfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: aaaa7294-63fb-407c-9dbe-a8282d6edcda,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c5f6b536a88ec232e63931b399ca3a32d410e628565d56cab9cfba5d5335174,PodSandboxId:d3dfabfedbc5351307fd792dc8f6be5f3a1e332fb14ac03b247fd5a3ead1a22e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724068705290622607,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b82c5559899a31d7270882afc7793,},
Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91cf874d8d0dc8f26b5bdb542f341a75ed323b886d15b9338c14855fdb890994,PodSandboxId:732035bad80065dc50e7eb2613259ffa26657cc179f49da4ada12b4d46f16848,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724068705236863470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8770efc46e477d513aa9f98adb24a89b,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f08887cc-8f2a-451c-9782-d89968e151b2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:09:18 multinode-320821 crio[2750]: time="2024-08-19 12:09:18.142809276Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9ce7e263-5fa2-4dd7-9fb2-9697788dd552 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:09:18 multinode-320821 crio[2750]: time="2024-08-19 12:09:18.142883571Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9ce7e263-5fa2-4dd7-9fb2-9697788dd552 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:09:18 multinode-320821 crio[2750]: time="2024-08-19 12:09:18.143744799Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9cd1eda-5b80-4f72-91d6-d98501192659 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:09:18 multinode-320821 crio[2750]: time="2024-08-19 12:09:18.144161782Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069358144142501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9cd1eda-5b80-4f72-91d6-d98501192659 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:09:18 multinode-320821 crio[2750]: time="2024-08-19 12:09:18.144647622Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9295f798-78c9-410b-8f32-2c574ede1beb name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:09:18 multinode-320821 crio[2750]: time="2024-08-19 12:09:18.144709475Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9295f798-78c9-410b-8f32-2c574ede1beb name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:09:18 multinode-320821 crio[2750]: time="2024-08-19 12:09:18.145257360Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c65bc9be79f6bb28386aad25d3c518f7993dd4e7fc0ded651a579bd226a21e3,PodSandboxId:2dee2b5f29bd311529135296b68308146071df02ec2cbe69458ed21e02ae3258,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724069151440580617,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kjbkv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05cf6615-10e8-43a0-a730-d56145296a11,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b8e142128325bd373fefb2716f98caf1c3bbf0a0954bc8530631e3370474a7,PodSandboxId:5e7bfdf817d635e8d64547426a53b5ea6f62c74856db067a7f5b1f5844004a2c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724069117843916603,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2k549,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fe239f5-81f4-4acf-899d-de128f526516,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d46285b33561282b00d12e30d0ac2eb7f61e9c16bd377657dbe56721cf8d28,PodSandboxId:13db2fe11ae92e7539de4a4ef99499eb398e3f5492aae8fce83ca293264671bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724069117905314601,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qfdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fbfb07-192b-46ad-9bd4-8de7672df209,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09611480af58a72299f384042adf2489546cb1d7b2d26eb694022bc29650ad95,PodSandboxId:0ecf0f0a485198582e3e57adc87eace03490d0fd904fbd054f4966e4567abb76,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724069117797909095,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kjdfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaaa7294-63fb-407c-9dbe-a8282d6edcda,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d3a7ba2d600d4ae49927eb69615fb3656fdd4b7b8646b8ed60706e2846c05e,PodSandboxId:e4d12cf76f6cf80e586bdc567f8aaa5c3a03c20ed0675398a8b5c4ab5f7bcb2d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724069117724374251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ff9c00-d89b-4078-a7f0-855458417ee5,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e16be7aeb3a437e5481a7e52a1ece6d8752be78153f8fd6b771bdf436021709f,PodSandboxId:6c177bf2d7c23ae97634bb26dd8c262706613a305321e8939b6a026f3da9b70b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724069114616727025,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0971c099a1aff8cbb98d5ae870a6c648,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1459312e96b1bf6a7864167625e6450505699b7c7f0ca669e14fac2299e3ac16,PodSandboxId:46c1fa2472ada3193201d93b313fd9a798e176a3b649b066feb31979be14d261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724069114618355937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ac53c658efe4ba8ffd0e2f236df130,},Annotations:map[string]string{io.kube
rnetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa19842caefbbec7549b2783ff78a002c7ddbbfb24b013cd0851f96812bfc4be,PodSandboxId:6a8470ecef3c4bbe0afbaa09ad097be123c9580ffa611ae3a30e04156892df9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724069112154105546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8770efc46e477d513aa9f98adb24a89b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99df12b846dbbd7da46c0177c9008c82b67ee6388e81bf59dc517c74a90a08c8,PodSandboxId:d910ec1f8211fbe2e23253112e6974521933f0db65d3e3e8470a791a1b7f30f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724069112151951349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b82c5559899a31d7270882afc7793,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e500f6d0181c7df702cc713c901a9426693e3a6f8fbbc8bf54adaaa27fe2a095,PodSandboxId:46c1fa2472ada3193201d93b313fd9a798e176a3b649b066feb31979be14d261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724069112118662945,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ac53c658efe4ba8ffd0e2f236df130,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea00494de0eb0945f4379b0a739e42c6da67d90e4b045d91b7ba42e5847b3a3,PodSandboxId:6c177bf2d7c23ae97634bb26dd8c262706613a305321e8939b6a026f3da9b70b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724069112101558818,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0971c099a1aff8cbb98d5ae870a6c648,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a4487882fdf97510aa26c8a63bf0a9cff43cd1b753731dcd86ef3ee10d9fda,PodSandboxId:71a902acf5f3a3225a11e0745ac28efd74020edc4921a53c46aa2a8fa686f63a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724068784030468860,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kjbkv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05cf6615-10e8-43a0-a730-d56145296a11,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a56e4a12c9aa8da01773547f9e365d4993ecac11cf16d4120c72ec977233f9d,PodSandboxId:7c5eedd9bea6b1c9f8de0fb57cb2d72b913b1d5335fb9fd5f46f7f1acfd1f9bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724068731196453193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qfdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fbfb07-192b-46ad-9bd4-8de7672df209,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aecdda5f9f767c862cb4ab67adc13b179d56fcf8fb1c01189a32546fb2fdcff,PodSandboxId:8eed3e1a479d2a6233444c1757b87c2826c6efc916138f94b2baf8374e1a65a2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724068731176049339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 28ff9c00-d89b-4078-a7f0-855458417ee5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:821073acd978a713b44069c4ae2ff6c599abf7108c707291b6495efed4a4318f,PodSandboxId:7d94180b7a43f7458998f458273e52f4df8d577f672f8572d96f9517aa556e78,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724068719429242208,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2k549,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 1fe239f5-81f4-4acf-899d-de128f526516,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f3a27e9da94d6d5999070bcb91d70d5ea1fec1ecc6eaad5b7fff4fce98647f,PodSandboxId:6c2448e0084ce4bd5016ad12939e878310178d35c704f948d6bcf0fa874c23cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724068716927547819,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kjdfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: aaaa7294-63fb-407c-9dbe-a8282d6edcda,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c5f6b536a88ec232e63931b399ca3a32d410e628565d56cab9cfba5d5335174,PodSandboxId:d3dfabfedbc5351307fd792dc8f6be5f3a1e332fb14ac03b247fd5a3ead1a22e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724068705290622607,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441b82c5559899a31d7270882afc7793,},
Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91cf874d8d0dc8f26b5bdb542f341a75ed323b886d15b9338c14855fdb890994,PodSandboxId:732035bad80065dc50e7eb2613259ffa26657cc179f49da4ada12b4d46f16848,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724068705236863470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-320821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8770efc46e477d513aa9f98adb24a89b,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9295f798-78c9-410b-8f32-2c574ede1beb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9c65bc9be79f6       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   2dee2b5f29bd3       busybox-7dff88458-kjbkv
	07d46285b3356       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   13db2fe11ae92       coredns-6f6b679f8f-qfdh2
	d2b8e14212832       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   5e7bfdf817d63       kindnet-2k549
	09611480af58a       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      4 minutes ago       Running             kube-proxy                1                   0ecf0f0a48519       kube-proxy-kjdfp
	e4d3a7ba2d600       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   e4d12cf76f6cf       storage-provisioner
	1459312e96b1b       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            2                   46c1fa2472ada       kube-apiserver-multinode-320821
	e16be7aeb3a43       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   2                   6c177bf2d7c23       kube-controller-manager-multinode-320821
	aa19842caefbb       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      4 minutes ago       Running             kube-scheduler            1                   6a8470ecef3c4       kube-scheduler-multinode-320821
	99df12b846dbb       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   d910ec1f8211f       etcd-multinode-320821
	e500f6d0181c7       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Exited              kube-apiserver            1                   46c1fa2472ada       kube-apiserver-multinode-320821
	eea00494de0eb       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Exited              kube-controller-manager   1                   6c177bf2d7c23       kube-controller-manager-multinode-320821
	b1a4487882fdf       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   71a902acf5f3a       busybox-7dff88458-kjbkv
	9a56e4a12c9aa       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   7c5eedd9bea6b       coredns-6f6b679f8f-qfdh2
	8aecdda5f9f76       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   8eed3e1a479d2       storage-provisioner
	821073acd978a       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    10 minutes ago      Exited              kindnet-cni               0                   7d94180b7a43f       kindnet-2k549
	16f3a27e9da94       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      10 minutes ago      Exited              kube-proxy                0                   6c2448e0084ce       kube-proxy-kjdfp
	3c5f6b536a88e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   d3dfabfedbc53       etcd-multinode-320821
	91cf874d8d0dc       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      10 minutes ago      Exited              kube-scheduler            0                   732035bad8006       kube-scheduler-multinode-320821
	
	
	==> coredns [07d46285b33561282b00d12e30d0ac2eb7f61e9c16bd377657dbe56721cf8d28] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60634 - 35121 "HINFO IN 810219209637135985.1649414912979778859. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.020888118s
	
	
	==> coredns [9a56e4a12c9aa8da01773547f9e365d4993ecac11cf16d4120c72ec977233f9d] <==
	[INFO] 10.244.0.3:51826 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002004814s
	[INFO] 10.244.0.3:46473 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00013555s
	[INFO] 10.244.0.3:58803 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000058241s
	[INFO] 10.244.0.3:53867 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001380791s
	[INFO] 10.244.0.3:46837 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000055562s
	[INFO] 10.244.0.3:40851 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066989s
	[INFO] 10.244.0.3:50911 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051093s
	[INFO] 10.244.1.2:45446 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128004s
	[INFO] 10.244.1.2:40145 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072366s
	[INFO] 10.244.1.2:36494 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057146s
	[INFO] 10.244.1.2:59410 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00006181s
	[INFO] 10.244.0.3:59393 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221636s
	[INFO] 10.244.0.3:35642 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098423s
	[INFO] 10.244.0.3:46741 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007295s
	[INFO] 10.244.0.3:34925 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063971s
	[INFO] 10.244.1.2:49528 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186733s
	[INFO] 10.244.1.2:53532 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000099828s
	[INFO] 10.244.1.2:60063 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135981s
	[INFO] 10.244.1.2:41718 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000073492s
	[INFO] 10.244.0.3:52028 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110627s
	[INFO] 10.244.0.3:42457 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000093285s
	[INFO] 10.244.0.3:49187 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079584s
	[INFO] 10.244.0.3:40837 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000101477s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-320821
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-320821
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=multinode-320821
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T11_58_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 11:58:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-320821
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:09:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:05:16 +0000   Mon, 19 Aug 2024 11:58:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:05:16 +0000   Mon, 19 Aug 2024 11:58:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:05:16 +0000   Mon, 19 Aug 2024 11:58:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:05:16 +0000   Mon, 19 Aug 2024 11:58:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.88
	  Hostname:    multinode-320821
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8d0023682c8b48b8b70516eeb6bb51ff
	  System UUID:                8d002368-2c8b-48b8-b705-16eeb6bb51ff
	  Boot ID:                    697c49aa-d957-4a31-8dc7-082016d87e90
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kjbkv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	  kube-system                 coredns-6f6b679f8f-qfdh2                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-320821                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-2k549                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-320821             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-320821    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-kjdfp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-320821             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-320821 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-320821 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-320821 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-320821 event: Registered Node multinode-320821 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-320821 status is now: NodeReady
	  Normal  Starting                 4m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m4s (x8 over 4m4s)  kubelet          Node multinode-320821 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s (x8 over 4m4s)  kubelet          Node multinode-320821 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s (x7 over 4m4s)  kubelet          Node multinode-320821 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m59s                node-controller  Node multinode-320821 event: Registered Node multinode-320821 in Controller
	
	
	Name:               multinode-320821-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-320821-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=multinode-320821
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T12_05_56_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:05:55 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-320821-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:06:57 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 19 Aug 2024 12:06:26 +0000   Mon, 19 Aug 2024 12:07:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 19 Aug 2024 12:06:26 +0000   Mon, 19 Aug 2024 12:07:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 19 Aug 2024 12:06:26 +0000   Mon, 19 Aug 2024 12:07:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 19 Aug 2024 12:06:26 +0000   Mon, 19 Aug 2024 12:07:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.69
	  Hostname:    multinode-320821-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 edb5d3af9cb941e2adfe6ed1ee25cd2e
	  System UUID:                edb5d3af-9cb9-41e2-adfe-6ed1ee25cd2e
	  Boot ID:                    3a87b408-22fc-4621-9ac5-4df7c8574c8a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5k84p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kindnet-nxv2m              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m59s
	  kube-system                 kube-proxy-sg6jr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m53s                  kube-proxy       
	  Normal  Starting                 3m17s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    9m59s (x2 over 9m59s)  kubelet          Node multinode-320821-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m59s (x2 over 9m59s)  kubelet          Node multinode-320821-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m59s (x2 over 9m59s)  kubelet          Node multinode-320821-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                9m39s                  kubelet          Node multinode-320821-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m23s (x2 over 3m23s)  kubelet          Node multinode-320821-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m23s (x2 over 3m23s)  kubelet          Node multinode-320821-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m23s (x2 over 3m23s)  kubelet          Node multinode-320821-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m19s                  node-controller  Node multinode-320821-m02 event: Registered Node multinode-320821-m02 in Controller
	  Normal  NodeReady                3m4s                   kubelet          Node multinode-320821-m02 status is now: NodeReady
	  Normal  NodeNotReady             99s                    node-controller  Node multinode-320821-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.050547] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.186868] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.119222] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.272560] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +3.987950] systemd-fstab-generator[755]: Ignoring "noauto" option for root device
	[  +4.415121] systemd-fstab-generator[892]: Ignoring "noauto" option for root device
	[  +0.058127] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.478399] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[  +0.091256] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.110170] systemd-fstab-generator[1328]: Ignoring "noauto" option for root device
	[  +0.141556] kauditd_printk_skb: 18 callbacks suppressed
	[ +15.369666] kauditd_printk_skb: 69 callbacks suppressed
	[Aug19 11:59] kauditd_printk_skb: 12 callbacks suppressed
	[Aug19 12:04] systemd-fstab-generator[2669]: Ignoring "noauto" option for root device
	[Aug19 12:05] systemd-fstab-generator[2681]: Ignoring "noauto" option for root device
	[  +0.168617] systemd-fstab-generator[2695]: Ignoring "noauto" option for root device
	[  +0.135374] systemd-fstab-generator[2707]: Ignoring "noauto" option for root device
	[  +0.302569] systemd-fstab-generator[2735]: Ignoring "noauto" option for root device
	[ +10.496841] systemd-fstab-generator[2834]: Ignoring "noauto" option for root device
	[  +0.085329] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.462336] systemd-fstab-generator[3261]: Ignoring "noauto" option for root device
	[  +3.734685] kauditd_printk_skb: 88 callbacks suppressed
	[ +13.527636] systemd-fstab-generator[3922]: Ignoring "noauto" option for root device
	[  +0.092846] kauditd_printk_skb: 34 callbacks suppressed
	[ +20.090215] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [3c5f6b536a88ec232e63931b399ca3a32d410e628565d56cab9cfba5d5335174] <==
	{"level":"info","ts":"2024-08-19T11:58:26.325296Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T11:58:26.328274Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T11:58:26.345302Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T11:58:26.351235Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.88:2379"}
	{"level":"info","ts":"2024-08-19T11:58:26.354938Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T11:58:26.356569Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T11:59:19.556144Z","caller":"traceutil/trace.go:171","msg":"trace[1761422469] linearizableReadLoop","detail":"{readStateIndex:461; appliedIndex:460; }","duration":"141.883131ms","start":"2024-08-19T11:59:19.414243Z","end":"2024-08-19T11:59:19.556126Z","steps":["trace[1761422469] 'read index received'  (duration: 122.825883ms)","trace[1761422469] 'applied index is now lower than readState.Index'  (duration: 19.05682ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T11:59:19.556249Z","caller":"traceutil/trace.go:171","msg":"trace[684930308] transaction","detail":"{read_only:false; response_revision:441; number_of_response:1; }","duration":"142.576536ms","start":"2024-08-19T11:59:19.413665Z","end":"2024-08-19T11:59:19.556242Z","steps":["trace[684930308] 'process raft request'  (duration: 123.44387ms)","trace[684930308] 'compare'  (duration: 18.950997ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T11:59:19.556612Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.253782ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-320821-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T11:59:19.556677Z","caller":"traceutil/trace.go:171","msg":"trace[443472046] range","detail":"{range_begin:/registry/minions/multinode-320821-m02; range_end:; response_count:0; response_revision:441; }","duration":"142.434123ms","start":"2024-08-19T11:59:19.414233Z","end":"2024-08-19T11:59:19.556667Z","steps":["trace[443472046] 'agreement among raft nodes before linearized reading'  (duration: 142.199035ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T11:59:19.556817Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.077221ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.88\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-08-19T11:59:19.556844Z","caller":"traceutil/trace.go:171","msg":"trace[2144411410] range","detail":"{range_begin:/registry/masterleases/192.168.39.88; range_end:; response_count:1; response_revision:441; }","duration":"119.106314ms","start":"2024-08-19T11:59:19.437732Z","end":"2024-08-19T11:59:19.556839Z","steps":["trace[2144411410] 'agreement among raft nodes before linearized reading'  (duration: 119.058549ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T12:00:14.737620Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.664633ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16262376666070078341 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-320821-m03.17ed1f73c629124f\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-320821-m03.17ed1f73c629124f\" value_size:642 lease:7039004629215302234 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-19T12:00:14.738175Z","caller":"traceutil/trace.go:171","msg":"trace[137775970] transaction","detail":"{read_only:false; response_revision:576; number_of_response:1; }","duration":"257.060325ms","start":"2024-08-19T12:00:14.481102Z","end":"2024-08-19T12:00:14.738162Z","steps":["trace[137775970] 'process raft request'  (duration: 75.389352ms)","trace[137775970] 'compare'  (duration: 180.582195ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T12:00:21.199850Z","caller":"traceutil/trace.go:171","msg":"trace[185200115] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"236.387522ms","start":"2024-08-19T12:00:20.963442Z","end":"2024-08-19T12:00:21.199829Z","steps":["trace[185200115] 'process raft request'  (duration: 194.409482ms)","trace[185200115] 'compare'  (duration: 41.885571ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T12:03:28.870132Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-19T12:03:28.870196Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-320821","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.88:2380"],"advertise-client-urls":["https://192.168.39.88:2379"]}
	{"level":"warn","ts":"2024-08-19T12:03:28.870269Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T12:03:28.871117Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T12:03:28.955947Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.88:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T12:03:28.955993Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.88:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-19T12:03:28.956057Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aa0bd43d5988e1af","current-leader-member-id":"aa0bd43d5988e1af"}
	{"level":"info","ts":"2024-08-19T12:03:28.958747Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.88:2380"}
	{"level":"info","ts":"2024-08-19T12:03:28.958875Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.88:2380"}
	{"level":"info","ts":"2024-08-19T12:03:28.958899Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-320821","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.88:2380"],"advertise-client-urls":["https://192.168.39.88:2379"]}
	
	
	==> etcd [99df12b846dbbd7da46c0177c9008c82b67ee6388e81bf59dc517c74a90a08c8] <==
	{"level":"info","ts":"2024-08-19T12:05:12.561576Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9f9d2ecdb39156b6","local-member-id":"aa0bd43d5988e1af","added-peer-id":"aa0bd43d5988e1af","added-peer-peer-urls":["https://192.168.39.88:2380"]}
	{"level":"info","ts":"2024-08-19T12:05:12.561723Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9f9d2ecdb39156b6","local-member-id":"aa0bd43d5988e1af","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T12:05:12.561779Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T12:05:12.581245Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T12:05:12.581457Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aa0bd43d5988e1af","initial-advertise-peer-urls":["https://192.168.39.88:2380"],"listen-peer-urls":["https://192.168.39.88:2380"],"advertise-client-urls":["https://192.168.39.88:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.88:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T12:05:12.581495Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T12:05:12.581628Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.88:2380"}
	{"level":"info","ts":"2024-08-19T12:05:12.581651Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.88:2380"}
	{"level":"info","ts":"2024-08-19T12:05:14.246032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T12:05:14.246092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T12:05:14.246115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af received MsgPreVoteResp from aa0bd43d5988e1af at term 2"}
	{"level":"info","ts":"2024-08-19T12:05:14.246128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T12:05:14.246135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af received MsgVoteResp from aa0bd43d5988e1af at term 3"}
	{"level":"info","ts":"2024-08-19T12:05:14.246144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af became leader at term 3"}
	{"level":"info","ts":"2024-08-19T12:05:14.246150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aa0bd43d5988e1af elected leader aa0bd43d5988e1af at term 3"}
	{"level":"info","ts":"2024-08-19T12:05:14.250094Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aa0bd43d5988e1af","local-member-attributes":"{Name:multinode-320821 ClientURLs:[https://192.168.39.88:2379]}","request-path":"/0/members/aa0bd43d5988e1af/attributes","cluster-id":"9f9d2ecdb39156b6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T12:05:14.250252Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T12:05:14.250501Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T12:05:14.251181Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T12:05:14.251997Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T12:05:14.252560Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T12:05:14.253230Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.88:2379"}
	{"level":"info","ts":"2024-08-19T12:05:14.255097Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T12:05:14.255119Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T12:06:03.626026Z","caller":"traceutil/trace.go:171","msg":"trace[1061296087] transaction","detail":"{read_only:false; response_revision:1033; number_of_response:1; }","duration":"200.966712ms","start":"2024-08-19T12:06:03.425046Z","end":"2024-08-19T12:06:03.626012Z","steps":["trace[1061296087] 'process raft request'  (duration: 200.8456ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:09:18 up 11 min,  0 users,  load average: 0.08, 0.22, 0.16
	Linux multinode-320821 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [821073acd978a713b44069c4ae2ff6c599abf7108c707291b6495efed4a4318f] <==
	I0819 12:02:40.429306       1 main.go:322] Node multinode-320821-m03 has CIDR [10.244.4.0/24] 
	I0819 12:02:50.429310       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0819 12:02:50.429347       1 main.go:299] handling current node
	I0819 12:02:50.429366       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 12:02:50.429373       1 main.go:322] Node multinode-320821-m02 has CIDR [10.244.1.0/24] 
	I0819 12:02:50.429574       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0819 12:02:50.429604       1 main.go:322] Node multinode-320821-m03 has CIDR [10.244.4.0/24] 
	I0819 12:03:00.431392       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0819 12:03:00.431558       1 main.go:299] handling current node
	I0819 12:03:00.431597       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 12:03:00.431620       1 main.go:322] Node multinode-320821-m02 has CIDR [10.244.1.0/24] 
	I0819 12:03:00.431782       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0819 12:03:00.431808       1 main.go:322] Node multinode-320821-m03 has CIDR [10.244.4.0/24] 
	I0819 12:03:10.434115       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0819 12:03:10.434145       1 main.go:299] handling current node
	I0819 12:03:10.434159       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 12:03:10.434164       1 main.go:322] Node multinode-320821-m02 has CIDR [10.244.1.0/24] 
	I0819 12:03:10.434311       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0819 12:03:10.434340       1 main.go:322] Node multinode-320821-m03 has CIDR [10.244.4.0/24] 
	I0819 12:03:20.432646       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0819 12:03:20.432678       1 main.go:299] handling current node
	I0819 12:03:20.432693       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 12:03:20.432698       1 main.go:322] Node multinode-320821-m02 has CIDR [10.244.1.0/24] 
	I0819 12:03:20.432821       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0819 12:03:20.432826       1 main.go:322] Node multinode-320821-m03 has CIDR [10.244.4.0/24] 
	
	
	==> kindnet [d2b8e142128325bd373fefb2716f98caf1c3bbf0a0954bc8530631e3370474a7] <==
	I0819 12:08:18.639635       1 main.go:322] Node multinode-320821-m02 has CIDR [10.244.1.0/24] 
	I0819 12:08:28.647340       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0819 12:08:28.647443       1 main.go:299] handling current node
	I0819 12:08:28.647476       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 12:08:28.647495       1 main.go:322] Node multinode-320821-m02 has CIDR [10.244.1.0/24] 
	I0819 12:08:38.647822       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0819 12:08:38.647869       1 main.go:299] handling current node
	I0819 12:08:38.647884       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 12:08:38.647890       1 main.go:322] Node multinode-320821-m02 has CIDR [10.244.1.0/24] 
	I0819 12:08:48.638787       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0819 12:08:48.638907       1 main.go:299] handling current node
	I0819 12:08:48.638938       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 12:08:48.638957       1 main.go:322] Node multinode-320821-m02 has CIDR [10.244.1.0/24] 
	I0819 12:08:58.639130       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0819 12:08:58.639185       1 main.go:299] handling current node
	I0819 12:08:58.639202       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 12:08:58.639208       1 main.go:322] Node multinode-320821-m02 has CIDR [10.244.1.0/24] 
	I0819 12:09:08.646405       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0819 12:09:08.646499       1 main.go:299] handling current node
	I0819 12:09:08.646572       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 12:09:08.646596       1 main.go:322] Node multinode-320821-m02 has CIDR [10.244.1.0/24] 
	I0819 12:09:18.639361       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 12:09:18.639400       1 main.go:322] Node multinode-320821-m02 has CIDR [10.244.1.0/24] 
	I0819 12:09:18.639552       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0819 12:09:18.639582       1 main.go:299] handling current node
	
	
	==> kube-apiserver [1459312e96b1bf6a7864167625e6450505699b7c7f0ca669e14fac2299e3ac16] <==
	I0819 12:05:16.505740       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 12:05:16.505777       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 12:05:16.506146       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 12:05:16.506367       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 12:05:16.508911       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 12:05:16.510428       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 12:05:16.510778       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 12:05:16.511327       1 aggregator.go:171] initial CRD sync complete...
	I0819 12:05:16.512008       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 12:05:16.512085       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 12:05:16.512111       1 cache.go:39] Caches are synced for autoregister controller
	I0819 12:05:16.513374       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 12:05:16.521571       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 12:05:16.589431       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 12:05:16.592815       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 12:05:16.592849       1 policy_source.go:224] refreshing policies
	I0819 12:05:16.633911       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 12:05:17.407956       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 12:05:18.779074       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 12:05:18.928469       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 12:05:18.942981       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 12:05:19.022742       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 12:05:19.034801       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 12:05:19.826869       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 12:05:20.120796       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [e500f6d0181c7df702cc713c901a9426693e3a6f8fbbc8bf54adaaa27fe2a095] <==
	
	
	==> kube-controller-manager [e16be7aeb3a437e5481a7e52a1ece6d8752be78153f8fd6b771bdf436021709f] <==
	I0819 12:06:34.117924       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-320821-m02"
	I0819 12:06:34.150327       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-320821-m03" podCIDRs=["10.244.2.0/24"]
	I0819 12:06:34.151055       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m03"
	I0819 12:06:34.151180       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m03"
	I0819 12:06:34.161074       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m03"
	I0819 12:06:34.488547       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m03"
	I0819 12:06:34.991945       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m03"
	I0819 12:06:44.422753       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m03"
	I0819 12:06:51.907128       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-320821-m03"
	I0819 12:06:51.907307       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m03"
	I0819 12:06:51.918890       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m03"
	I0819 12:06:54.915279       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m03"
	I0819 12:06:56.566664       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m03"
	I0819 12:06:56.580083       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m03"
	I0819 12:06:56.933705       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-320821-m02"
	I0819 12:06:56.933797       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m03"
	I0819 12:07:39.932698       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m02"
	I0819 12:07:39.955455       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m02"
	I0819 12:07:39.968680       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="20.323075ms"
	I0819 12:07:39.968862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="97.838µs"
	I0819 12:07:45.054656       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-320821-m02"
	I0819 12:07:59.781767       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-bvdxj"
	I0819 12:07:59.807505       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-bvdxj"
	I0819 12:07:59.807579       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-dvqkr"
	I0819 12:07:59.829394       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-dvqkr"
	
	
	==> kube-controller-manager [eea00494de0eb0945f4379b0a739e42c6da67d90e4b045d91b7ba42e5847b3a3] <==
	
	
	==> kube-proxy [09611480af58a72299f384042adf2489546cb1d7b2d26eb694022bc29650ad95] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 12:05:18.110824       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 12:05:18.125091       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.88"]
	E0819 12:05:18.125248       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 12:05:18.165945       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 12:05:18.166005       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 12:05:18.166033       1 server_linux.go:169] "Using iptables Proxier"
	I0819 12:05:18.169350       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 12:05:18.169626       1 server.go:483] "Version info" version="v1.31.0"
	I0819 12:05:18.169648       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:05:18.171112       1 config.go:197] "Starting service config controller"
	I0819 12:05:18.171135       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 12:05:18.171151       1 config.go:104] "Starting endpoint slice config controller"
	I0819 12:05:18.171155       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 12:05:18.171503       1 config.go:326] "Starting node config controller"
	I0819 12:05:18.171563       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 12:05:18.271303       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 12:05:18.271361       1 shared_informer.go:320] Caches are synced for service config
	I0819 12:05:18.271675       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [16f3a27e9da94d6d5999070bcb91d70d5ea1fec1ecc6eaad5b7fff4fce98647f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 11:58:37.080938       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 11:58:37.092865       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.88"]
	E0819 11:58:37.092926       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 11:58:37.127211       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 11:58:37.127260       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 11:58:37.127319       1 server_linux.go:169] "Using iptables Proxier"
	I0819 11:58:37.130138       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 11:58:37.130405       1 server.go:483] "Version info" version="v1.31.0"
	I0819 11:58:37.130429       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 11:58:37.131799       1 config.go:197] "Starting service config controller"
	I0819 11:58:37.131847       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 11:58:37.131868       1 config.go:104] "Starting endpoint slice config controller"
	I0819 11:58:37.131872       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 11:58:37.132323       1 config.go:326] "Starting node config controller"
	I0819 11:58:37.132351       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 11:58:37.231956       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 11:58:37.232018       1 shared_informer.go:320] Caches are synced for service config
	I0819 11:58:37.232464       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [91cf874d8d0dc8f26b5bdb542f341a75ed323b886d15b9338c14855fdb890994] <==
	E0819 11:58:27.802376       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:58:27.802420       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 11:58:27.802445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:58:28.615697       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 11:58:28.615867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 11:58:28.626250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 11:58:28.626348       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:58:28.646865       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 11:58:28.647399       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:58:28.652970       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 11:58:28.653580       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:58:28.668729       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 11:58:28.669563       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:58:28.874798       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 11:58:28.874845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:58:28.984289       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 11:58:28.984336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:58:29.014738       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 11:58:29.014915       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:58:29.041902       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 11:58:29.042084       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:58:29.070952       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 11:58:29.071075       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0819 11:58:32.292055       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 12:03:28.865277       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [aa19842caefbbec7549b2783ff78a002c7ddbbfb24b013cd0851f96812bfc4be] <==
	W0819 12:05:14.159224       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.88:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.88:8443: connect: connection refused
	E0819 12:05:14.159279       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.88:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.88:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:05:14.213032       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.88:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.88:8443: connect: connection refused
	E0819 12:05:14.213089       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.88:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.88:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:05:14.280051       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.88:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.88:8443: connect: connection refused
	E0819 12:05:14.280106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.88:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.88:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:05:14.288911       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.88:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.88:8443: connect: connection refused
	E0819 12:05:14.288966       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.88:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.88:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:05:14.322754       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.88:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.88:8443: connect: connection refused
	E0819 12:05:14.322817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.88:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.88:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:05:14.375112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.88:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.88:8443: connect: connection refused
	E0819 12:05:14.375201       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.88:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.88:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:05:14.386877       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.88:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.88:8443: connect: connection refused
	E0819 12:05:14.386943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.88:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.88:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:05:14.471245       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.88:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.88:8443: connect: connection refused
	E0819 12:05:14.471383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.88:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.88:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:05:14.557812       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.88:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.88:8443: connect: connection refused
	E0819 12:05:14.557899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.88:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.88:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:05:14.642295       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.88:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.88:8443: connect: connection refused
	E0819 12:05:14.642377       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.88:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.88:8443: connect: connection refused" logger="UnhandledError"
	W0819 12:05:16.466118       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 12:05:16.466283       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 12:05:16.511722       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 12:05:16.511769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 12:05:21.129196       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 12:08:04 multinode-320821 kubelet[3268]: E0819 12:08:04.271037    3268 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069284270152901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:08:14 multinode-320821 kubelet[3268]: E0819 12:08:14.227788    3268 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 12:08:14 multinode-320821 kubelet[3268]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 12:08:14 multinode-320821 kubelet[3268]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 12:08:14 multinode-320821 kubelet[3268]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 12:08:14 multinode-320821 kubelet[3268]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 12:08:14 multinode-320821 kubelet[3268]: E0819 12:08:14.272385    3268 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069294272046357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:08:14 multinode-320821 kubelet[3268]: E0819 12:08:14.272449    3268 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069294272046357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:08:24 multinode-320821 kubelet[3268]: E0819 12:08:24.275382    3268 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069304274402475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:08:24 multinode-320821 kubelet[3268]: E0819 12:08:24.276139    3268 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069304274402475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:08:34 multinode-320821 kubelet[3268]: E0819 12:08:34.277788    3268 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069314277402493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:08:34 multinode-320821 kubelet[3268]: E0819 12:08:34.277821    3268 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069314277402493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:08:44 multinode-320821 kubelet[3268]: E0819 12:08:44.280347    3268 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069324279783611,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:08:44 multinode-320821 kubelet[3268]: E0819 12:08:44.280410    3268 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069324279783611,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:08:54 multinode-320821 kubelet[3268]: E0819 12:08:54.283023    3268 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069334281694740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:08:54 multinode-320821 kubelet[3268]: E0819 12:08:54.283299    3268 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069334281694740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:09:04 multinode-320821 kubelet[3268]: E0819 12:09:04.285579    3268 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069344285144878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:09:04 multinode-320821 kubelet[3268]: E0819 12:09:04.285602    3268 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069344285144878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:09:14 multinode-320821 kubelet[3268]: E0819 12:09:14.227594    3268 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 12:09:14 multinode-320821 kubelet[3268]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 12:09:14 multinode-320821 kubelet[3268]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 12:09:14 multinode-320821 kubelet[3268]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 12:09:14 multinode-320821 kubelet[3268]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 12:09:14 multinode-320821 kubelet[3268]: E0819 12:09:14.288145    3268 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069354287396688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:09:14 multinode-320821 kubelet[3268]: E0819 12:09:14.288184    3268 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069354287396688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 12:09:17.729688  141331 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19476-99410/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-320821 -n multinode-320821
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-320821 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.21s)
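Note on the stderr block above: the "bufio.Scanner: token too long" error is Go's bufio.Scanner hitting its default 64 KiB token limit while reading lastStart.txt, which evidently contains a single line longer than that. A minimal, hypothetical sketch of reading such a file with an enlarged Scanner buffer (file path and size limit are illustrative only, not minikube's actual code):

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Hypothetical path; the file in this run lives under the
		// jenkins minikube-integration logs directory.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default limit is bufio.MaxScanTokenSize (64 KiB); raise it so a
		// very long log line does not fail with "bufio.Scanner: token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}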

                                                
                                    
x
+
TestPreload (167.44s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-967295 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0819 12:13:35.347634  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-967295 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m35.518687563s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-967295 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-967295 image pull gcr.io/k8s-minikube/busybox: (1.923680757s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-967295
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-967295: (7.296195439s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-967295 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-967295 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (59.550099716s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-967295 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-08-19 12:15:55.249100601 +0000 UTC m=+5455.986674418
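The assertion that fails above amounts to listing images on the restarted profile and requiring that the image pulled before the stop is still present. A rough, hypothetical restatement of that check (profile name and binary path are taken from this run; this is not the actual preload_test.go source):

	package preload_sketch

	import (
		"os/exec"
		"strings"
		"testing"
	)

	// Hypothetical restatement of the failing assertion: after the stop/start
	// cycle, the image pulled earlier must still show up in "image list".
	func TestBusyboxSurvivesRestart(t *testing.T) {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "test-preload-967295",
			"image", "list").CombinedOutput()
		if err != nil {
			t.Fatalf("image list failed: %v\n%s", err, out)
		}
		if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			t.Fatalf("expected to find gcr.io/k8s-minikube/busybox in image list output, got:\n%s", out)
		}
	}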
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-967295 -n test-preload-967295
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-967295 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-967295 logs -n 25: (1.093560133s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-320821 ssh -n                                                                 | multinode-320821     | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-320821 ssh -n multinode-320821 sudo cat                                       | multinode-320821     | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | /home/docker/cp-test_multinode-320821-m03_multinode-320821.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-320821 cp multinode-320821-m03:/home/docker/cp-test.txt                       | multinode-320821     | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821-m02:/home/docker/cp-test_multinode-320821-m03_multinode-320821-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-320821 ssh -n                                                                 | multinode-320821     | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | multinode-320821-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-320821 ssh -n multinode-320821-m02 sudo cat                                   | multinode-320821     | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	|         | /home/docker/cp-test_multinode-320821-m03_multinode-320821-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-320821 node stop m03                                                          | multinode-320821     | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:00 UTC |
	| node    | multinode-320821 node start                                                             | multinode-320821     | jenkins | v1.33.1 | 19 Aug 24 12:00 UTC | 19 Aug 24 12:01 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-320821                                                                | multinode-320821     | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC |                     |
	| stop    | -p multinode-320821                                                                     | multinode-320821     | jenkins | v1.33.1 | 19 Aug 24 12:01 UTC |                     |
	| start   | -p multinode-320821                                                                     | multinode-320821     | jenkins | v1.33.1 | 19 Aug 24 12:03 UTC | 19 Aug 24 12:06 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-320821                                                                | multinode-320821     | jenkins | v1.33.1 | 19 Aug 24 12:06 UTC |                     |
	| node    | multinode-320821 node delete                                                            | multinode-320821     | jenkins | v1.33.1 | 19 Aug 24 12:06 UTC | 19 Aug 24 12:06 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-320821 stop                                                                   | multinode-320821     | jenkins | v1.33.1 | 19 Aug 24 12:06 UTC |                     |
	| start   | -p multinode-320821                                                                     | multinode-320821     | jenkins | v1.33.1 | 19 Aug 24 12:09 UTC | 19 Aug 24 12:12 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-320821                                                                | multinode-320821     | jenkins | v1.33.1 | 19 Aug 24 12:12 UTC |                     |
	| start   | -p multinode-320821-m02                                                                 | multinode-320821-m02 | jenkins | v1.33.1 | 19 Aug 24 12:12 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-320821-m03                                                                 | multinode-320821-m03 | jenkins | v1.33.1 | 19 Aug 24 12:12 UTC | 19 Aug 24 12:13 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-320821                                                                 | multinode-320821     | jenkins | v1.33.1 | 19 Aug 24 12:13 UTC |                     |
	| delete  | -p multinode-320821-m03                                                                 | multinode-320821-m03 | jenkins | v1.33.1 | 19 Aug 24 12:13 UTC | 19 Aug 24 12:13 UTC |
	| delete  | -p multinode-320821                                                                     | multinode-320821     | jenkins | v1.33.1 | 19 Aug 24 12:13 UTC | 19 Aug 24 12:13 UTC |
	| start   | -p test-preload-967295                                                                  | test-preload-967295  | jenkins | v1.33.1 | 19 Aug 24 12:13 UTC | 19 Aug 24 12:14 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-967295 image pull                                                          | test-preload-967295  | jenkins | v1.33.1 | 19 Aug 24 12:14 UTC | 19 Aug 24 12:14 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-967295                                                                  | test-preload-967295  | jenkins | v1.33.1 | 19 Aug 24 12:14 UTC | 19 Aug 24 12:14 UTC |
	| start   | -p test-preload-967295                                                                  | test-preload-967295  | jenkins | v1.33.1 | 19 Aug 24 12:14 UTC | 19 Aug 24 12:15 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-967295 image list                                                          | test-preload-967295  | jenkins | v1.33.1 | 19 Aug 24 12:15 UTC | 19 Aug 24 12:15 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 12:14:55
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 12:14:55.521291  143731 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:14:55.521420  143731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:14:55.521428  143731 out.go:358] Setting ErrFile to fd 2...
	I0819 12:14:55.521433  143731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:14:55.521589  143731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 12:14:55.522213  143731 out.go:352] Setting JSON to false
	I0819 12:14:55.523155  143731 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7041,"bootTime":1724062654,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 12:14:55.523217  143731 start.go:139] virtualization: kvm guest
	I0819 12:14:55.525313  143731 out.go:177] * [test-preload-967295] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 12:14:55.526599  143731 notify.go:220] Checking for updates...
	I0819 12:14:55.526636  143731 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 12:14:55.528044  143731 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:14:55.529337  143731 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 12:14:55.530625  143731 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 12:14:55.531708  143731 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 12:14:55.532898  143731 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 12:14:55.534366  143731 config.go:182] Loaded profile config "test-preload-967295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0819 12:14:55.534820  143731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:14:55.534895  143731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:14:55.549917  143731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46159
	I0819 12:14:55.550398  143731 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:14:55.551028  143731 main.go:141] libmachine: Using API Version  1
	I0819 12:14:55.551050  143731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:14:55.551434  143731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:14:55.551742  143731 main.go:141] libmachine: (test-preload-967295) Calling .DriverName
	I0819 12:14:55.553598  143731 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 12:14:55.554910  143731 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 12:14:55.555365  143731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:14:55.555417  143731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:14:55.570853  143731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40857
	I0819 12:14:55.572378  143731 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:14:55.572914  143731 main.go:141] libmachine: Using API Version  1
	I0819 12:14:55.572941  143731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:14:55.573282  143731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:14:55.573490  143731 main.go:141] libmachine: (test-preload-967295) Calling .DriverName
	I0819 12:14:55.609621  143731 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 12:14:55.610914  143731 start.go:297] selected driver: kvm2
	I0819 12:14:55.610940  143731 start.go:901] validating driver "kvm2" against &{Name:test-preload-967295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-967295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:14:55.611053  143731 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 12:14:55.611813  143731 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:14:55.611913  143731 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19476-99410/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 12:14:55.627459  143731 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 12:14:55.627858  143731 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 12:14:55.627928  143731 cni.go:84] Creating CNI manager for ""
	I0819 12:14:55.627938  143731 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 12:14:55.627984  143731 start.go:340] cluster config:
	{Name:test-preload-967295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-967295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:14:55.628083  143731 iso.go:125] acquiring lock: {Name:mkf3e460532d8bd3cc1352bbf698537d79a481a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:14:55.629842  143731 out.go:177] * Starting "test-preload-967295" primary control-plane node in "test-preload-967295" cluster
	I0819 12:14:55.630977  143731 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0819 12:14:55.656224  143731 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0819 12:14:55.656259  143731 cache.go:56] Caching tarball of preloaded images
	I0819 12:14:55.656407  143731 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0819 12:14:55.658131  143731 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0819 12:14:55.659297  143731 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0819 12:14:55.684437  143731 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0819 12:14:59.108397  143731 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0819 12:14:59.108502  143731 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0819 12:14:59.963561  143731 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
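
[Editor's note] The download above fetches the preload tarball with an md5 checksum in the query string and then verifies the file on disk before reusing it. Below is a minimal, hypothetical Go sketch of that verification step; the path and checksum are taken from the log lines above, but the helper itself is illustrative and is not minikube's actual preload.go.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 streams the file through an md5 hash and compares the digest
// against the expected hex string (illustrative helper, not minikube code).
func verifyMD5(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	// Path and checksum copied from the log above.
	tarball := "/home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4"
	if err := verifyMD5(tarball, "b2ee0ab83ed99f9e7ff71cb0cf27e8f9"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("preload tarball checksum OK")
}
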
	I0819 12:14:59.963685  143731 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/test-preload-967295/config.json ...
	I0819 12:14:59.963921  143731 start.go:360] acquireMachinesLock for test-preload-967295: {Name:mk79269fbbd6912302c8df8ddb039d4e2f1a0790 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 12:14:59.963983  143731 start.go:364] duration metric: took 39.919µs to acquireMachinesLock for "test-preload-967295"
	I0819 12:14:59.963999  143731 start.go:96] Skipping create...Using existing machine configuration
	I0819 12:14:59.964007  143731 fix.go:54] fixHost starting: 
	I0819 12:14:59.964304  143731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:14:59.964329  143731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:14:59.979153  143731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37269
	I0819 12:14:59.979701  143731 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:14:59.980230  143731 main.go:141] libmachine: Using API Version  1
	I0819 12:14:59.980252  143731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:14:59.980643  143731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:14:59.980871  143731 main.go:141] libmachine: (test-preload-967295) Calling .DriverName
	I0819 12:14:59.981039  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetState
	I0819 12:14:59.982732  143731 fix.go:112] recreateIfNeeded on test-preload-967295: state=Stopped err=<nil>
	I0819 12:14:59.982759  143731 main.go:141] libmachine: (test-preload-967295) Calling .DriverName
	W0819 12:14:59.982940  143731 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 12:14:59.985003  143731 out.go:177] * Restarting existing kvm2 VM for "test-preload-967295" ...
	I0819 12:14:59.986453  143731 main.go:141] libmachine: (test-preload-967295) Calling .Start
	I0819 12:14:59.986691  143731 main.go:141] libmachine: (test-preload-967295) Ensuring networks are active...
	I0819 12:14:59.987603  143731 main.go:141] libmachine: (test-preload-967295) Ensuring network default is active
	I0819 12:14:59.987900  143731 main.go:141] libmachine: (test-preload-967295) Ensuring network mk-test-preload-967295 is active
	I0819 12:14:59.988290  143731 main.go:141] libmachine: (test-preload-967295) Getting domain xml...
	I0819 12:14:59.989049  143731 main.go:141] libmachine: (test-preload-967295) Creating domain...
	I0819 12:15:01.202324  143731 main.go:141] libmachine: (test-preload-967295) Waiting to get IP...
	I0819 12:15:01.203202  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:01.203580  143731 main.go:141] libmachine: (test-preload-967295) DBG | unable to find current IP address of domain test-preload-967295 in network mk-test-preload-967295
	I0819 12:15:01.203653  143731 main.go:141] libmachine: (test-preload-967295) DBG | I0819 12:15:01.203566  143784 retry.go:31] will retry after 189.027024ms: waiting for machine to come up
	I0819 12:15:01.394163  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:01.394584  143731 main.go:141] libmachine: (test-preload-967295) DBG | unable to find current IP address of domain test-preload-967295 in network mk-test-preload-967295
	I0819 12:15:01.394612  143731 main.go:141] libmachine: (test-preload-967295) DBG | I0819 12:15:01.394556  143784 retry.go:31] will retry after 339.661718ms: waiting for machine to come up
	I0819 12:15:01.736306  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:01.736721  143731 main.go:141] libmachine: (test-preload-967295) DBG | unable to find current IP address of domain test-preload-967295 in network mk-test-preload-967295
	I0819 12:15:01.736752  143731 main.go:141] libmachine: (test-preload-967295) DBG | I0819 12:15:01.736673  143784 retry.go:31] will retry after 331.619446ms: waiting for machine to come up
	I0819 12:15:02.070242  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:02.070739  143731 main.go:141] libmachine: (test-preload-967295) DBG | unable to find current IP address of domain test-preload-967295 in network mk-test-preload-967295
	I0819 12:15:02.070782  143731 main.go:141] libmachine: (test-preload-967295) DBG | I0819 12:15:02.070627  143784 retry.go:31] will retry after 609.211449ms: waiting for machine to come up
	I0819 12:15:02.681916  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:02.682404  143731 main.go:141] libmachine: (test-preload-967295) DBG | unable to find current IP address of domain test-preload-967295 in network mk-test-preload-967295
	I0819 12:15:02.682436  143731 main.go:141] libmachine: (test-preload-967295) DBG | I0819 12:15:02.682352  143784 retry.go:31] will retry after 572.277434ms: waiting for machine to come up
	I0819 12:15:03.256108  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:03.256574  143731 main.go:141] libmachine: (test-preload-967295) DBG | unable to find current IP address of domain test-preload-967295 in network mk-test-preload-967295
	I0819 12:15:03.256598  143731 main.go:141] libmachine: (test-preload-967295) DBG | I0819 12:15:03.256515  143784 retry.go:31] will retry after 749.332666ms: waiting for machine to come up
	I0819 12:15:04.007570  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:04.008170  143731 main.go:141] libmachine: (test-preload-967295) DBG | unable to find current IP address of domain test-preload-967295 in network mk-test-preload-967295
	I0819 12:15:04.008196  143731 main.go:141] libmachine: (test-preload-967295) DBG | I0819 12:15:04.008100  143784 retry.go:31] will retry after 1.149977609s: waiting for machine to come up
	I0819 12:15:05.160038  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:05.160562  143731 main.go:141] libmachine: (test-preload-967295) DBG | unable to find current IP address of domain test-preload-967295 in network mk-test-preload-967295
	I0819 12:15:05.160592  143731 main.go:141] libmachine: (test-preload-967295) DBG | I0819 12:15:05.160508  143784 retry.go:31] will retry after 935.384734ms: waiting for machine to come up
	I0819 12:15:06.097667  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:06.098104  143731 main.go:141] libmachine: (test-preload-967295) DBG | unable to find current IP address of domain test-preload-967295 in network mk-test-preload-967295
	I0819 12:15:06.098135  143731 main.go:141] libmachine: (test-preload-967295) DBG | I0819 12:15:06.098042  143784 retry.go:31] will retry after 1.140520093s: waiting for machine to come up
	I0819 12:15:07.240463  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:07.240971  143731 main.go:141] libmachine: (test-preload-967295) DBG | unable to find current IP address of domain test-preload-967295 in network mk-test-preload-967295
	I0819 12:15:07.240996  143731 main.go:141] libmachine: (test-preload-967295) DBG | I0819 12:15:07.240918  143784 retry.go:31] will retry after 2.156177837s: waiting for machine to come up
	I0819 12:15:09.398347  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:09.398705  143731 main.go:141] libmachine: (test-preload-967295) DBG | unable to find current IP address of domain test-preload-967295 in network mk-test-preload-967295
	I0819 12:15:09.398734  143731 main.go:141] libmachine: (test-preload-967295) DBG | I0819 12:15:09.398651  143784 retry.go:31] will retry after 2.329849956s: waiting for machine to come up
	I0819 12:15:11.731351  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:11.731811  143731 main.go:141] libmachine: (test-preload-967295) DBG | unable to find current IP address of domain test-preload-967295 in network mk-test-preload-967295
	I0819 12:15:11.731842  143731 main.go:141] libmachine: (test-preload-967295) DBG | I0819 12:15:11.731756  143784 retry.go:31] will retry after 2.556172377s: waiting for machine to come up
	I0819 12:15:14.291544  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:14.291982  143731 main.go:141] libmachine: (test-preload-967295) DBG | unable to find current IP address of domain test-preload-967295 in network mk-test-preload-967295
	I0819 12:15:14.292017  143731 main.go:141] libmachine: (test-preload-967295) DBG | I0819 12:15:14.291919  143784 retry.go:31] will retry after 3.725862643s: waiting for machine to come up
	I0819 12:15:18.021096  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:18.021586  143731 main.go:141] libmachine: (test-preload-967295) Found IP for machine: 192.168.39.161
	I0819 12:15:18.021614  143731 main.go:141] libmachine: (test-preload-967295) Reserving static IP address...
	I0819 12:15:18.021630  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has current primary IP address 192.168.39.161 and MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:18.022263  143731 main.go:141] libmachine: (test-preload-967295) DBG | found host DHCP lease matching {name: "test-preload-967295", mac: "52:54:00:2f:de:40", ip: "192.168.39.161"} in network mk-test-preload-967295: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:10 +0000 UTC Type:0 Mac:52:54:00:2f:de:40 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-967295 Clientid:01:52:54:00:2f:de:40}
	I0819 12:15:18.022288  143731 main.go:141] libmachine: (test-preload-967295) DBG | skip adding static IP to network mk-test-preload-967295 - found existing host DHCP lease matching {name: "test-preload-967295", mac: "52:54:00:2f:de:40", ip: "192.168.39.161"}
	I0819 12:15:18.022303  143731 main.go:141] libmachine: (test-preload-967295) Reserved static IP address: 192.168.39.161
	I0819 12:15:18.022316  143731 main.go:141] libmachine: (test-preload-967295) Waiting for SSH to be available...
	I0819 12:15:18.022328  143731 main.go:141] libmachine: (test-preload-967295) DBG | Getting to WaitForSSH function...
	I0819 12:15:18.024460  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:18.024806  143731 main.go:141] libmachine: (test-preload-967295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:de:40", ip: ""} in network mk-test-preload-967295: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:10 +0000 UTC Type:0 Mac:52:54:00:2f:de:40 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-967295 Clientid:01:52:54:00:2f:de:40}
	I0819 12:15:18.024836  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined IP address 192.168.39.161 and MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:18.024951  143731 main.go:141] libmachine: (test-preload-967295) DBG | Using SSH client type: external
	I0819 12:15:18.024993  143731 main.go:141] libmachine: (test-preload-967295) DBG | Using SSH private key: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/test-preload-967295/id_rsa (-rw-------)
	I0819 12:15:18.025079  143731 main.go:141] libmachine: (test-preload-967295) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.161 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19476-99410/.minikube/machines/test-preload-967295/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 12:15:18.025107  143731 main.go:141] libmachine: (test-preload-967295) DBG | About to run SSH command:
	I0819 12:15:18.025132  143731 main.go:141] libmachine: (test-preload-967295) DBG | exit 0
	I0819 12:15:18.147893  143731 main.go:141] libmachine: (test-preload-967295) DBG | SSH cmd err, output: <nil>: 
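
[Editor's note] The retry.go:31 lines above show libmachine polling the libvirt network for the domain's DHCP lease, sleeping a growing interval between attempts until the VM reports an IP and SSH answers. The following is a rough, hypothetical Go sketch of that poll-with-backoff pattern; the function, delays, and growth factor are illustrative and are not minikube's actual retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP keeps calling pollOnce until it returns an address or the
// deadline passes, sleeping a little longer (with jitter) after each miss.
func waitForIP(pollOnce func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond // illustrative starting delay
	for time.Now().Before(deadline) {
		ip, err := pollOnce()
		if err == nil && ip != "" {
			return ip, nil
		}
		// Grow the delay and add jitter, mirroring the increasing
		// "will retry after ..." intervals in the log above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 5 { // pretend the DHCP lease shows up on the 5th poll
			return "", errors.New("no lease yet")
		}
		return "192.168.39.161", nil
	}, time.Minute)
	fmt.Println(ip, err)
}
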
	I0819 12:15:18.148272  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetConfigRaw
	I0819 12:15:18.149013  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetIP
	I0819 12:15:18.151460  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:18.151933  143731 main.go:141] libmachine: (test-preload-967295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:de:40", ip: ""} in network mk-test-preload-967295: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:10 +0000 UTC Type:0 Mac:52:54:00:2f:de:40 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-967295 Clientid:01:52:54:00:2f:de:40}
	I0819 12:15:18.151965  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined IP address 192.168.39.161 and MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:18.152248  143731 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/test-preload-967295/config.json ...
	I0819 12:15:18.152509  143731 machine.go:93] provisionDockerMachine start ...
	I0819 12:15:18.152531  143731 main.go:141] libmachine: (test-preload-967295) Calling .DriverName
	I0819 12:15:18.152744  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHHostname
	I0819 12:15:18.154984  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:18.155347  143731 main.go:141] libmachine: (test-preload-967295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:de:40", ip: ""} in network mk-test-preload-967295: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:10 +0000 UTC Type:0 Mac:52:54:00:2f:de:40 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-967295 Clientid:01:52:54:00:2f:de:40}
	I0819 12:15:18.155377  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined IP address 192.168.39.161 and MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:18.155490  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHPort
	I0819 12:15:18.155673  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHKeyPath
	I0819 12:15:18.155842  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHKeyPath
	I0819 12:15:18.155986  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHUsername
	I0819 12:15:18.156119  143731 main.go:141] libmachine: Using SSH client type: native
	I0819 12:15:18.156310  143731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0819 12:15:18.156319  143731 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 12:15:18.260060  143731 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 12:15:18.260088  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetMachineName
	I0819 12:15:18.260386  143731 buildroot.go:166] provisioning hostname "test-preload-967295"
	I0819 12:15:18.260419  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetMachineName
	I0819 12:15:18.260631  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHHostname
	I0819 12:15:18.263042  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:18.263557  143731 main.go:141] libmachine: (test-preload-967295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:de:40", ip: ""} in network mk-test-preload-967295: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:10 +0000 UTC Type:0 Mac:52:54:00:2f:de:40 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-967295 Clientid:01:52:54:00:2f:de:40}
	I0819 12:15:18.263587  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined IP address 192.168.39.161 and MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:18.263838  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHPort
	I0819 12:15:18.264040  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHKeyPath
	I0819 12:15:18.264208  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHKeyPath
	I0819 12:15:18.264343  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHUsername
	I0819 12:15:18.264515  143731 main.go:141] libmachine: Using SSH client type: native
	I0819 12:15:18.264687  143731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0819 12:15:18.264699  143731 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-967295 && echo "test-preload-967295" | sudo tee /etc/hostname
	I0819 12:15:18.381238  143731 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-967295
	
	I0819 12:15:18.381265  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHHostname
	I0819 12:15:18.383939  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:18.384350  143731 main.go:141] libmachine: (test-preload-967295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:de:40", ip: ""} in network mk-test-preload-967295: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:10 +0000 UTC Type:0 Mac:52:54:00:2f:de:40 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-967295 Clientid:01:52:54:00:2f:de:40}
	I0819 12:15:18.384372  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined IP address 192.168.39.161 and MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:18.384647  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHPort
	I0819 12:15:18.384880  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHKeyPath
	I0819 12:15:18.385062  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHKeyPath
	I0819 12:15:18.385224  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHUsername
	I0819 12:15:18.385369  143731 main.go:141] libmachine: Using SSH client type: native
	I0819 12:15:18.385549  143731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0819 12:15:18.385566  143731 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-967295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-967295/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-967295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 12:15:18.496435  143731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:15:18.496465  143731 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19476-99410/.minikube CaCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19476-99410/.minikube}
	I0819 12:15:18.496499  143731 buildroot.go:174] setting up certificates
	I0819 12:15:18.496510  143731 provision.go:84] configureAuth start
	I0819 12:15:18.496523  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetMachineName
	I0819 12:15:18.496814  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetIP
	I0819 12:15:18.499581  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:18.500035  143731 main.go:141] libmachine: (test-preload-967295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:de:40", ip: ""} in network mk-test-preload-967295: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:10 +0000 UTC Type:0 Mac:52:54:00:2f:de:40 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-967295 Clientid:01:52:54:00:2f:de:40}
	I0819 12:15:18.500075  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined IP address 192.168.39.161 and MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:18.500204  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHHostname
	I0819 12:15:18.502192  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:18.502505  143731 main.go:141] libmachine: (test-preload-967295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:de:40", ip: ""} in network mk-test-preload-967295: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:10 +0000 UTC Type:0 Mac:52:54:00:2f:de:40 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-967295 Clientid:01:52:54:00:2f:de:40}
	I0819 12:15:18.502535  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined IP address 192.168.39.161 and MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:18.502643  143731 provision.go:143] copyHostCerts
	I0819 12:15:18.502710  143731 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem, removing ...
	I0819 12:15:18.502723  143731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 12:15:18.502798  143731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem (1123 bytes)
	I0819 12:15:18.502905  143731 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem, removing ...
	I0819 12:15:18.502915  143731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 12:15:18.502955  143731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem (1679 bytes)
	I0819 12:15:18.503029  143731 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem, removing ...
	I0819 12:15:18.503039  143731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 12:15:18.503073  143731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem (1082 bytes)
	I0819 12:15:18.503141  143731 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem org=jenkins.test-preload-967295 san=[127.0.0.1 192.168.39.161 localhost minikube test-preload-967295]
	I0819 12:15:18.632574  143731 provision.go:177] copyRemoteCerts
	I0819 12:15:18.632642  143731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 12:15:18.632677  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHHostname
	I0819 12:15:18.635170  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:18.635499  143731 main.go:141] libmachine: (test-preload-967295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:de:40", ip: ""} in network mk-test-preload-967295: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:10 +0000 UTC Type:0 Mac:52:54:00:2f:de:40 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-967295 Clientid:01:52:54:00:2f:de:40}
	I0819 12:15:18.635531  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined IP address 192.168.39.161 and MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:18.635662  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHPort
	I0819 12:15:18.635911  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHKeyPath
	I0819 12:15:18.636065  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHUsername
	I0819 12:15:18.636199  143731 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/test-preload-967295/id_rsa Username:docker}
	I0819 12:15:18.717612  143731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 12:15:18.740700  143731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0819 12:15:18.763394  143731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 12:15:18.785864  143731 provision.go:87] duration metric: took 289.33971ms to configureAuth
	I0819 12:15:18.785895  143731 buildroot.go:189] setting minikube options for container-runtime
	I0819 12:15:18.786059  143731 config.go:182] Loaded profile config "test-preload-967295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0819 12:15:18.786133  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHHostname
	I0819 12:15:18.788593  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:18.788958  143731 main.go:141] libmachine: (test-preload-967295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:de:40", ip: ""} in network mk-test-preload-967295: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:10 +0000 UTC Type:0 Mac:52:54:00:2f:de:40 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-967295 Clientid:01:52:54:00:2f:de:40}
	I0819 12:15:18.788984  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined IP address 192.168.39.161 and MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:18.789242  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHPort
	I0819 12:15:18.789466  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHKeyPath
	I0819 12:15:18.789659  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHKeyPath
	I0819 12:15:18.789796  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHUsername
	I0819 12:15:18.789974  143731 main.go:141] libmachine: Using SSH client type: native
	I0819 12:15:18.790140  143731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0819 12:15:18.790157  143731 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 12:15:19.047210  143731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 12:15:19.047244  143731 machine.go:96] duration metric: took 894.71818ms to provisionDockerMachine
	I0819 12:15:19.047260  143731 start.go:293] postStartSetup for "test-preload-967295" (driver="kvm2")
	I0819 12:15:19.047277  143731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 12:15:19.047317  143731 main.go:141] libmachine: (test-preload-967295) Calling .DriverName
	I0819 12:15:19.047687  143731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 12:15:19.047772  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHHostname
	I0819 12:15:19.050584  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:19.051013  143731 main.go:141] libmachine: (test-preload-967295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:de:40", ip: ""} in network mk-test-preload-967295: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:10 +0000 UTC Type:0 Mac:52:54:00:2f:de:40 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-967295 Clientid:01:52:54:00:2f:de:40}
	I0819 12:15:19.051037  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined IP address 192.168.39.161 and MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:19.051175  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHPort
	I0819 12:15:19.051388  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHKeyPath
	I0819 12:15:19.051568  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHUsername
	I0819 12:15:19.051705  143731 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/test-preload-967295/id_rsa Username:docker}
	I0819 12:15:19.134397  143731 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 12:15:19.138446  143731 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 12:15:19.138480  143731 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/addons for local assets ...
	I0819 12:15:19.138567  143731 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/files for local assets ...
	I0819 12:15:19.138667  143731 filesync.go:149] local asset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> 1066322.pem in /etc/ssl/certs
	I0819 12:15:19.138760  143731 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 12:15:19.148167  143731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 12:15:19.171226  143731 start.go:296] duration metric: took 123.94747ms for postStartSetup
	I0819 12:15:19.171274  143731 fix.go:56] duration metric: took 19.207265893s for fixHost
	I0819 12:15:19.171311  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHHostname
	I0819 12:15:19.173710  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:19.174018  143731 main.go:141] libmachine: (test-preload-967295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:de:40", ip: ""} in network mk-test-preload-967295: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:10 +0000 UTC Type:0 Mac:52:54:00:2f:de:40 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-967295 Clientid:01:52:54:00:2f:de:40}
	I0819 12:15:19.174039  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined IP address 192.168.39.161 and MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:19.174226  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHPort
	I0819 12:15:19.174427  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHKeyPath
	I0819 12:15:19.174592  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHKeyPath
	I0819 12:15:19.174731  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHUsername
	I0819 12:15:19.174880  143731 main.go:141] libmachine: Using SSH client type: native
	I0819 12:15:19.175045  143731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0819 12:15:19.175055  143731 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 12:15:19.284292  143731 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724069719.260339585
	
	I0819 12:15:19.284332  143731 fix.go:216] guest clock: 1724069719.260339585
	I0819 12:15:19.284340  143731 fix.go:229] Guest: 2024-08-19 12:15:19.260339585 +0000 UTC Remote: 2024-08-19 12:15:19.171288203 +0000 UTC m=+23.687915204 (delta=89.051382ms)
	I0819 12:15:19.284371  143731 fix.go:200] guest clock delta is within tolerance: 89.051382ms
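
[Editor's note] The fix.go lines above run "date +%s.%N" on the guest, parse the seconds.nanoseconds output, and compare it with the host clock; the ~89ms delta is accepted because it is inside the allowed drift. A small, hypothetical Go sketch of that comparison follows; the one-second tolerance is an assumption for illustration, not the value minikube uses.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the "seconds.nanoseconds" output of
// `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1724069719.260339585") // value from the log above
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance for this sketch
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
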
	I0819 12:15:19.284377  143731 start.go:83] releasing machines lock for "test-preload-967295", held for 19.320384725s
	I0819 12:15:19.284395  143731 main.go:141] libmachine: (test-preload-967295) Calling .DriverName
	I0819 12:15:19.284682  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetIP
	I0819 12:15:19.287022  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:19.287311  143731 main.go:141] libmachine: (test-preload-967295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:de:40", ip: ""} in network mk-test-preload-967295: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:10 +0000 UTC Type:0 Mac:52:54:00:2f:de:40 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-967295 Clientid:01:52:54:00:2f:de:40}
	I0819 12:15:19.287342  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined IP address 192.168.39.161 and MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:19.287523  143731 main.go:141] libmachine: (test-preload-967295) Calling .DriverName
	I0819 12:15:19.288072  143731 main.go:141] libmachine: (test-preload-967295) Calling .DriverName
	I0819 12:15:19.288285  143731 main.go:141] libmachine: (test-preload-967295) Calling .DriverName
	I0819 12:15:19.288378  143731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 12:15:19.288439  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHHostname
	I0819 12:15:19.288531  143731 ssh_runner.go:195] Run: cat /version.json
	I0819 12:15:19.288560  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHHostname
	I0819 12:15:19.290904  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:19.291112  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:19.291301  143731 main.go:141] libmachine: (test-preload-967295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:de:40", ip: ""} in network mk-test-preload-967295: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:10 +0000 UTC Type:0 Mac:52:54:00:2f:de:40 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-967295 Clientid:01:52:54:00:2f:de:40}
	I0819 12:15:19.291327  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined IP address 192.168.39.161 and MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:19.291455  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHPort
	I0819 12:15:19.291486  143731 main.go:141] libmachine: (test-preload-967295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:de:40", ip: ""} in network mk-test-preload-967295: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:10 +0000 UTC Type:0 Mac:52:54:00:2f:de:40 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-967295 Clientid:01:52:54:00:2f:de:40}
	I0819 12:15:19.291518  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined IP address 192.168.39.161 and MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:19.291643  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHPort
	I0819 12:15:19.291665  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHKeyPath
	I0819 12:15:19.291833  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHUsername
	I0819 12:15:19.291850  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHKeyPath
	I0819 12:15:19.292029  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHUsername
	I0819 12:15:19.292029  143731 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/test-preload-967295/id_rsa Username:docker}
	I0819 12:15:19.292204  143731 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/test-preload-967295/id_rsa Username:docker}
	I0819 12:15:19.369841  143731 ssh_runner.go:195] Run: systemctl --version
	I0819 12:15:19.392322  143731 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 12:15:19.536674  143731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 12:15:19.542630  143731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 12:15:19.542718  143731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 12:15:19.561815  143731 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 12:15:19.561846  143731 start.go:495] detecting cgroup driver to use...
	I0819 12:15:19.561926  143731 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 12:15:19.581861  143731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 12:15:19.597274  143731 docker.go:217] disabling cri-docker service (if available) ...
	I0819 12:15:19.597363  143731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 12:15:19.612026  143731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 12:15:19.626716  143731 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 12:15:19.735764  143731 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 12:15:19.869913  143731 docker.go:233] disabling docker service ...
	I0819 12:15:19.870000  143731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 12:15:19.883588  143731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 12:15:19.896142  143731 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 12:15:20.043098  143731 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 12:15:20.155973  143731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 12:15:20.170062  143731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 12:15:20.188298  143731 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0819 12:15:20.188370  143731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:15:20.198486  143731 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 12:15:20.198598  143731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:15:20.208818  143731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:15:20.219119  143731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:15:20.229492  143731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 12:15:20.240152  143731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:15:20.250610  143731 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:15:20.267600  143731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:15:20.277828  143731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 12:15:20.287115  143731 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 12:15:20.287190  143731 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 12:15:20.300917  143731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
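
[Editor's note] The lines above show the bridge netfilter check failing with status 255 (the module is not loaded yet), after which minikube falls back to loading br_netfilter and enabling IPv4 forwarding. Below is a hypothetical local sketch of that check-then-fallback flow using os/exec; in the real run these commands are executed on the guest through ssh_runner, not locally.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Check whether the bridge netfilter sysctl exists yet.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Not fatal: load the module and continue, as the log above does.
		fmt.Println("bridge-nf-call-iptables not available, loading br_netfilter:", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe br_netfilter failed:", err)
		}
	}
	// Make sure IPv4 forwarding is on for the bridge CNI.
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}
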
	I0819 12:15:20.310586  143731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:15:20.427081  143731 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 12:15:20.555040  143731 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 12:15:20.555126  143731 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 12:15:20.559648  143731 start.go:563] Will wait 60s for crictl version
	I0819 12:15:20.559709  143731 ssh_runner.go:195] Run: which crictl
	I0819 12:15:20.563850  143731 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 12:15:20.605635  143731 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 12:15:20.605712  143731 ssh_runner.go:195] Run: crio --version
	I0819 12:15:20.634071  143731 ssh_runner.go:195] Run: crio --version
	I0819 12:15:20.663625  143731 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0819 12:15:20.664863  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetIP
	I0819 12:15:20.667570  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:20.667959  143731 main.go:141] libmachine: (test-preload-967295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:de:40", ip: ""} in network mk-test-preload-967295: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:10 +0000 UTC Type:0 Mac:52:54:00:2f:de:40 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-967295 Clientid:01:52:54:00:2f:de:40}
	I0819 12:15:20.667984  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined IP address 192.168.39.161 and MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:20.668232  143731 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 12:15:20.672347  143731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 12:15:20.684679  143731 kubeadm.go:883] updating cluster {Name:test-preload-967295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-967295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 12:15:20.684806  143731 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0819 12:15:20.684851  143731 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:15:20.720671  143731 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0819 12:15:20.720757  143731 ssh_runner.go:195] Run: which lz4
	I0819 12:15:20.725025  143731 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 12:15:20.729114  143731 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 12:15:20.729152  143731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0819 12:15:22.210271  143731 crio.go:462] duration metric: took 1.485286769s to copy over tarball
	I0819 12:15:22.210366  143731 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 12:15:24.673320  143731 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.462920695s)
	I0819 12:15:24.673348  143731 crio.go:469] duration metric: took 2.46304662s to extract the tarball
	I0819 12:15:24.673358  143731 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 12:15:24.714133  143731 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:15:24.756804  143731 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0819 12:15:24.756831  143731 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 12:15:24.756873  143731 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 12:15:24.756911  143731 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0819 12:15:24.756926  143731 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 12:15:24.756941  143731 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 12:15:24.756971  143731 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0819 12:15:24.756904  143731 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0819 12:15:24.756979  143731 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0819 12:15:24.756926  143731 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0819 12:15:24.758480  143731 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 12:15:24.758496  143731 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0819 12:15:24.758541  143731 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 12:15:24.758552  143731 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0819 12:15:24.758546  143731 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0819 12:15:24.758480  143731 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 12:15:24.758481  143731 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0819 12:15:24.758481  143731 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0819 12:15:24.920765  143731 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0819 12:15:24.925027  143731 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0819 12:15:24.926874  143731 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0819 12:15:24.932216  143731 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 12:15:24.934907  143731 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0819 12:15:24.939059  143731 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0819 12:15:24.946416  143731 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0819 12:15:25.006504  143731 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0819 12:15:25.006583  143731 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 12:15:25.006637  143731 ssh_runner.go:195] Run: which crictl
	I0819 12:15:25.068104  143731 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0819 12:15:25.068160  143731 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0819 12:15:25.068217  143731 ssh_runner.go:195] Run: which crictl
	I0819 12:15:25.083525  143731 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0819 12:15:25.083571  143731 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0819 12:15:25.083582  143731 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0819 12:15:25.083605  143731 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0819 12:15:25.083638  143731 ssh_runner.go:195] Run: which crictl
	I0819 12:15:25.083656  143731 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0819 12:15:25.083708  143731 ssh_runner.go:195] Run: which crictl
	I0819 12:15:25.083610  143731 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 12:15:25.083791  143731 ssh_runner.go:195] Run: which crictl
	I0819 12:15:25.115688  143731 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0819 12:15:25.115752  143731 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0819 12:15:25.115795  143731 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0819 12:15:25.115845  143731 ssh_runner.go:195] Run: which crictl
	I0819 12:15:25.115851  143731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 12:15:25.115761  143731 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0819 12:15:25.115897  143731 ssh_runner.go:195] Run: which crictl
	I0819 12:15:25.115945  143731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0819 12:15:25.115952  143731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0819 12:15:25.115980  143731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 12:15:25.115995  143731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0819 12:15:25.184841  143731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0819 12:15:25.235795  143731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0819 12:15:25.237177  143731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0819 12:15:25.237234  143731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 12:15:25.237253  143731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 12:15:25.237272  143731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0819 12:15:25.237281  143731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0819 12:15:25.247165  143731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0819 12:15:25.358431  143731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0819 12:15:25.385481  143731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 12:15:25.385548  143731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0819 12:15:25.385608  143731 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 12:15:25.392758  143731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 12:15:25.392789  143731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0819 12:15:25.392858  143731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0819 12:15:25.400033  143731 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0819 12:15:25.400150  143731 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0819 12:15:25.509941  143731 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0819 12:15:25.510049  143731 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0819 12:15:25.547815  143731 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0819 12:15:25.547903  143731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0819 12:15:25.547927  143731 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0819 12:15:25.648524  143731 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0819 12:15:25.648526  143731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0819 12:15:25.648566  143731 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0819 12:15:25.648580  143731 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0819 12:15:25.648591  143731 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0819 12:15:25.648539  143731 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0819 12:15:25.648605  143731 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0819 12:15:25.648650  143731 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0819 12:15:25.648654  143731 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0819 12:15:25.648643  143731 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0819 12:15:25.648714  143731 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0819 12:15:25.648713  143731 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0819 12:15:25.655153  143731 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0819 12:15:25.660439  143731 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0819 12:15:25.660494  143731 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0819 12:15:28.119002  143731 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4: (2.470406846s)
	I0819 12:15:28.119074  143731 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0819 12:15:28.119014  143731 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.470343603s)
	I0819 12:15:28.119177  143731 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19476-99410/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0819 12:15:28.119209  143731 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0819 12:15:28.119258  143731 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0819 12:15:28.119183  143731 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0819 12:15:28.263851  143731 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19476-99410/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0819 12:15:28.263861  143731 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0819 12:15:28.263908  143731 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0819 12:15:28.263965  143731 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0819 12:15:28.909736  143731 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19476-99410/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0819 12:15:28.909791  143731 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0819 12:15:28.909860  143731 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0819 12:15:31.058203  143731 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.14831035s)
	I0819 12:15:31.058244  143731 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19476-99410/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0819 12:15:31.058270  143731 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0819 12:15:31.058314  143731 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0819 12:15:31.499934  143731 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19476-99410/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0819 12:15:31.499993  143731 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0819 12:15:31.500066  143731 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0819 12:15:32.240379  143731 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19476-99410/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0819 12:15:32.240427  143731 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0819 12:15:32.240472  143731 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0819 12:15:33.085624  143731 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19476-99410/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0819 12:15:33.085677  143731 cache_images.go:123] Successfully loaded all cached images
	I0819 12:15:33.085684  143731 cache_images.go:92] duration metric: took 8.328840792s to LoadCachedImages
	I0819 12:15:33.085702  143731 kubeadm.go:934] updating node { 192.168.39.161 8443 v1.24.4 crio true true} ...
	I0819 12:15:33.085848  143731 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-967295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-967295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 12:15:33.085935  143731 ssh_runner.go:195] Run: crio config
	I0819 12:15:33.133011  143731 cni.go:84] Creating CNI manager for ""
	I0819 12:15:33.133039  143731 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 12:15:33.133052  143731 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 12:15:33.133071  143731 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.161 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-967295 NodeName:test-preload-967295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 12:15:33.133222  143731 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-967295"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.161
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.161"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 12:15:33.133306  143731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0819 12:15:33.143009  143731 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 12:15:33.143116  143731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 12:15:33.152633  143731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0819 12:15:33.169512  143731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 12:15:33.186589  143731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0819 12:15:33.203898  143731 ssh_runner.go:195] Run: grep 192.168.39.161	control-plane.minikube.internal$ /etc/hosts
	I0819 12:15:33.207912  143731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.161	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 12:15:33.219915  143731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:15:33.357558  143731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:15:33.375266  143731 certs.go:68] Setting up /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/test-preload-967295 for IP: 192.168.39.161
	I0819 12:15:33.375295  143731 certs.go:194] generating shared ca certs ...
	I0819 12:15:33.375317  143731 certs.go:226] acquiring lock for ca certs: {Name:mkea0a5571fe2f86ea35780a8a0585cdcb5f186c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:15:33.375488  143731 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key
	I0819 12:15:33.375534  143731 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key
	I0819 12:15:33.375544  143731 certs.go:256] generating profile certs ...
	I0819 12:15:33.375661  143731 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/test-preload-967295/client.key
	I0819 12:15:33.375776  143731 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/test-preload-967295/apiserver.key.30c60715
	I0819 12:15:33.375836  143731 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/test-preload-967295/proxy-client.key
	I0819 12:15:33.375999  143731 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem (1338 bytes)
	W0819 12:15:33.376044  143731 certs.go:480] ignoring /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632_empty.pem, impossibly tiny 0 bytes
	I0819 12:15:33.376058  143731 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 12:15:33.376095  143731 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem (1082 bytes)
	I0819 12:15:33.376126  143731 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem (1123 bytes)
	I0819 12:15:33.376156  143731 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem (1679 bytes)
	I0819 12:15:33.376215  143731 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 12:15:33.377119  143731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 12:15:33.410614  143731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 12:15:33.451046  143731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 12:15:33.483501  143731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 12:15:33.518185  143731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/test-preload-967295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0819 12:15:33.554105  143731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/test-preload-967295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 12:15:33.587660  143731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/test-preload-967295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 12:15:33.613052  143731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/test-preload-967295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 12:15:33.638017  143731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem --> /usr/share/ca-certificates/106632.pem (1338 bytes)
	I0819 12:15:33.662284  143731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /usr/share/ca-certificates/1066322.pem (1708 bytes)
	I0819 12:15:33.687403  143731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 12:15:33.718507  143731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 12:15:33.736287  143731 ssh_runner.go:195] Run: openssl version
	I0819 12:15:33.742104  143731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1066322.pem && ln -fs /usr/share/ca-certificates/1066322.pem /etc/ssl/certs/1066322.pem"
	I0819 12:15:33.753111  143731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1066322.pem
	I0819 12:15:33.758050  143731 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 11:26 /usr/share/ca-certificates/1066322.pem
	I0819 12:15:33.758135  143731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1066322.pem
	I0819 12:15:33.763980  143731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1066322.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 12:15:33.774973  143731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 12:15:33.786530  143731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:15:33.791234  143731 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:15:33.791312  143731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:15:33.797222  143731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 12:15:33.808168  143731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106632.pem && ln -fs /usr/share/ca-certificates/106632.pem /etc/ssl/certs/106632.pem"
	I0819 12:15:33.819112  143731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106632.pem
	I0819 12:15:33.823768  143731 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 11:26 /usr/share/ca-certificates/106632.pem
	I0819 12:15:33.823849  143731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106632.pem
	I0819 12:15:33.830043  143731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106632.pem /etc/ssl/certs/51391683.0"
	I0819 12:15:33.841270  143731 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:15:33.847222  143731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 12:15:33.853772  143731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 12:15:33.860033  143731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 12:15:33.866231  143731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 12:15:33.872213  143731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 12:15:33.878236  143731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 12:15:33.884323  143731 kubeadm.go:392] StartCluster: {Name:test-preload-967295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-967295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:15:33.884412  143731 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 12:15:33.884498  143731 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 12:15:33.920112  143731 cri.go:89] found id: ""
	I0819 12:15:33.920182  143731 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 12:15:33.930254  143731 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 12:15:33.930274  143731 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 12:15:33.930326  143731 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 12:15:33.940229  143731 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 12:15:33.940700  143731 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-967295" does not appear in /home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 12:15:33.940823  143731 kubeconfig.go:62] /home/jenkins/minikube-integration/19476-99410/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-967295" cluster setting kubeconfig missing "test-preload-967295" context setting]
	I0819 12:15:33.941200  143731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/kubeconfig: {Name:mk73914d2bd0db664ade6c952753a7dd30404784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:15:33.941810  143731 kapi.go:59] client config for test-preload-967295: &rest.Config{Host:"https://192.168.39.161:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19476-99410/.minikube/profiles/test-preload-967295/client.crt", KeyFile:"/home/jenkins/minikube-integration/19476-99410/.minikube/profiles/test-preload-967295/client.key", CAFile:"/home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 12:15:33.942515  143731 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 12:15:33.952282  143731 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.161
	I0819 12:15:33.952319  143731 kubeadm.go:1160] stopping kube-system containers ...
	I0819 12:15:33.952332  143731 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 12:15:33.952398  143731 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 12:15:33.986981  143731 cri.go:89] found id: ""
	I0819 12:15:33.987057  143731 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 12:15:34.004219  143731 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 12:15:34.015559  143731 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 12:15:34.015583  143731 kubeadm.go:157] found existing configuration files:
	
	I0819 12:15:34.015660  143731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 12:15:34.024617  143731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 12:15:34.024685  143731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 12:15:34.034201  143731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 12:15:34.043510  143731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 12:15:34.043597  143731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 12:15:34.053706  143731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 12:15:34.062977  143731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 12:15:34.063059  143731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 12:15:34.072878  143731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 12:15:34.082119  143731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 12:15:34.082188  143731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 12:15:34.091825  143731 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 12:15:34.101741  143731 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 12:15:34.188865  143731 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 12:15:34.912587  143731 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 12:15:35.166545  143731 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 12:15:35.234507  143731 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 12:15:35.319193  143731 api_server.go:52] waiting for apiserver process to appear ...
	I0819 12:15:35.319277  143731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:15:35.819375  143731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:15:36.319606  143731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:15:36.336069  143731 api_server.go:72] duration metric: took 1.016890993s to wait for apiserver process to appear ...
	I0819 12:15:36.336095  143731 api_server.go:88] waiting for apiserver healthz status ...
	I0819 12:15:36.336119  143731 api_server.go:253] Checking apiserver healthz at https://192.168.39.161:8443/healthz ...
	I0819 12:15:36.336652  143731 api_server.go:269] stopped: https://192.168.39.161:8443/healthz: Get "https://192.168.39.161:8443/healthz": dial tcp 192.168.39.161:8443: connect: connection refused
	I0819 12:15:36.836892  143731 api_server.go:253] Checking apiserver healthz at https://192.168.39.161:8443/healthz ...
	I0819 12:15:36.837527  143731 api_server.go:269] stopped: https://192.168.39.161:8443/healthz: Get "https://192.168.39.161:8443/healthz": dial tcp 192.168.39.161:8443: connect: connection refused
	I0819 12:15:37.336930  143731 api_server.go:253] Checking apiserver healthz at https://192.168.39.161:8443/healthz ...
	I0819 12:15:40.606113  143731 api_server.go:279] https://192.168.39.161:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 12:15:40.606143  143731 api_server.go:103] status: https://192.168.39.161:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 12:15:40.606159  143731 api_server.go:253] Checking apiserver healthz at https://192.168.39.161:8443/healthz ...
	I0819 12:15:40.654347  143731 api_server.go:279] https://192.168.39.161:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 12:15:40.654381  143731 api_server.go:103] status: https://192.168.39.161:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 12:15:40.836730  143731 api_server.go:253] Checking apiserver healthz at https://192.168.39.161:8443/healthz ...
	I0819 12:15:40.850859  143731 api_server.go:279] https://192.168.39.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 12:15:40.850909  143731 api_server.go:103] status: https://192.168.39.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 12:15:41.336462  143731 api_server.go:253] Checking apiserver healthz at https://192.168.39.161:8443/healthz ...
	I0819 12:15:41.346246  143731 api_server.go:279] https://192.168.39.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 12:15:41.346296  143731 api_server.go:103] status: https://192.168.39.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 12:15:41.836889  143731 api_server.go:253] Checking apiserver healthz at https://192.168.39.161:8443/healthz ...
	I0819 12:15:41.842586  143731 api_server.go:279] https://192.168.39.161:8443/healthz returned 200:
	ok
	I0819 12:15:41.850971  143731 api_server.go:141] control plane version: v1.24.4
	I0819 12:15:41.851002  143731 api_server.go:131] duration metric: took 5.514899086s to wait for apiserver health ...
	I0819 12:15:41.851013  143731 cni.go:84] Creating CNI manager for ""
	I0819 12:15:41.851022  143731 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 12:15:41.852634  143731 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 12:15:41.853811  143731 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 12:15:41.864482  143731 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 12:15:41.882472  143731 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 12:15:41.882561  143731 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 12:15:41.882576  143731 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 12:15:41.892997  143731 system_pods.go:59] 7 kube-system pods found
	I0819 12:15:41.893040  143731 system_pods.go:61] "coredns-6d4b75cb6d-8pr44" [47659e16-00c4-4b79-bc76-5264959ff870] Running
	I0819 12:15:41.893047  143731 system_pods.go:61] "etcd-test-preload-967295" [e80f6297-f78a-455e-abc8-587773329b0d] Running
	I0819 12:15:41.893052  143731 system_pods.go:61] "kube-apiserver-test-preload-967295" [e35a8c1a-6f30-44ef-8c2b-f1ae73917829] Running
	I0819 12:15:41.893059  143731 system_pods.go:61] "kube-controller-manager-test-preload-967295" [60819d41-b82d-4e3d-a844-220b3fcbbff6] Running
	I0819 12:15:41.893069  143731 system_pods.go:61] "kube-proxy-ts7rh" [43510626-369e-4bb7-992f-b4d6f965fd38] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 12:15:41.893078  143731 system_pods.go:61] "kube-scheduler-test-preload-967295" [92d236c4-b41f-48ea-93de-e767a7b8e475] Running
	I0819 12:15:41.893089  143731 system_pods.go:61] "storage-provisioner" [5f068e4e-85fb-402c-a6f7-8690a1bf26fe] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 12:15:41.893096  143731 system_pods.go:74] duration metric: took 10.603657ms to wait for pod list to return data ...
	I0819 12:15:41.893107  143731 node_conditions.go:102] verifying NodePressure condition ...
	I0819 12:15:41.897391  143731 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 12:15:41.897420  143731 node_conditions.go:123] node cpu capacity is 2
	I0819 12:15:41.897434  143731 node_conditions.go:105] duration metric: took 4.321035ms to run NodePressure ...
	I0819 12:15:41.897451  143731 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 12:15:42.168350  143731 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 12:15:42.172401  143731 retry.go:31] will retry after 307.87702ms: kubelet not initialised
	I0819 12:15:42.496671  143731 retry.go:31] will retry after 452.856114ms: kubelet not initialised
	I0819 12:15:42.956481  143731 retry.go:31] will retry after 397.686807ms: kubelet not initialised
	I0819 12:15:43.360723  143731 retry.go:31] will retry after 729.204452ms: kubelet not initialised
	I0819 12:15:44.096899  143731 retry.go:31] will retry after 1.73241364s: kubelet not initialised
	I0819 12:15:45.834568  143731 kubeadm.go:739] kubelet initialised
	I0819 12:15:45.834590  143731 kubeadm.go:740] duration metric: took 3.666211652s waiting for restarted kubelet to initialise ...
	I0819 12:15:45.834599  143731 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 12:15:45.839262  143731 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-8pr44" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:45.845449  143731 pod_ready.go:98] node "test-preload-967295" hosting pod "coredns-6d4b75cb6d-8pr44" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-967295" has status "Ready":"False"
	I0819 12:15:45.845474  143731 pod_ready.go:82] duration metric: took 6.187116ms for pod "coredns-6d4b75cb6d-8pr44" in "kube-system" namespace to be "Ready" ...
	E0819 12:15:45.845483  143731 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-967295" hosting pod "coredns-6d4b75cb6d-8pr44" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-967295" has status "Ready":"False"
	I0819 12:15:45.845490  143731 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-967295" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:45.851107  143731 pod_ready.go:98] node "test-preload-967295" hosting pod "etcd-test-preload-967295" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-967295" has status "Ready":"False"
	I0819 12:15:45.851133  143731 pod_ready.go:82] duration metric: took 5.634281ms for pod "etcd-test-preload-967295" in "kube-system" namespace to be "Ready" ...
	E0819 12:15:45.851143  143731 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-967295" hosting pod "etcd-test-preload-967295" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-967295" has status "Ready":"False"
	I0819 12:15:45.851150  143731 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-967295" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:45.855634  143731 pod_ready.go:98] node "test-preload-967295" hosting pod "kube-apiserver-test-preload-967295" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-967295" has status "Ready":"False"
	I0819 12:15:45.855660  143731 pod_ready.go:82] duration metric: took 4.503075ms for pod "kube-apiserver-test-preload-967295" in "kube-system" namespace to be "Ready" ...
	E0819 12:15:45.855676  143731 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-967295" hosting pod "kube-apiserver-test-preload-967295" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-967295" has status "Ready":"False"
	I0819 12:15:45.855683  143731 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-967295" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:45.860245  143731 pod_ready.go:98] node "test-preload-967295" hosting pod "kube-controller-manager-test-preload-967295" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-967295" has status "Ready":"False"
	I0819 12:15:45.860272  143731 pod_ready.go:82] duration metric: took 4.578478ms for pod "kube-controller-manager-test-preload-967295" in "kube-system" namespace to be "Ready" ...
	E0819 12:15:45.860282  143731 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-967295" hosting pod "kube-controller-manager-test-preload-967295" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-967295" has status "Ready":"False"
	I0819 12:15:45.860288  143731 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-ts7rh" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:46.233758  143731 pod_ready.go:98] node "test-preload-967295" hosting pod "kube-proxy-ts7rh" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-967295" has status "Ready":"False"
	I0819 12:15:46.233788  143731 pod_ready.go:82] duration metric: took 373.490281ms for pod "kube-proxy-ts7rh" in "kube-system" namespace to be "Ready" ...
	E0819 12:15:46.233798  143731 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-967295" hosting pod "kube-proxy-ts7rh" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-967295" has status "Ready":"False"
	I0819 12:15:46.233804  143731 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-967295" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:46.634177  143731 pod_ready.go:98] node "test-preload-967295" hosting pod "kube-scheduler-test-preload-967295" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-967295" has status "Ready":"False"
	I0819 12:15:46.634208  143731 pod_ready.go:82] duration metric: took 400.397382ms for pod "kube-scheduler-test-preload-967295" in "kube-system" namespace to be "Ready" ...
	E0819 12:15:46.634218  143731 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-967295" hosting pod "kube-scheduler-test-preload-967295" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-967295" has status "Ready":"False"
	I0819 12:15:46.634225  143731 pod_ready.go:39] duration metric: took 799.613387ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 12:15:46.634242  143731 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 12:15:46.645877  143731 ops.go:34] apiserver oom_adj: -16
	I0819 12:15:46.645902  143731 kubeadm.go:597] duration metric: took 12.715622087s to restartPrimaryControlPlane
	I0819 12:15:46.645914  143731 kubeadm.go:394] duration metric: took 12.761603486s to StartCluster
	I0819 12:15:46.645937  143731 settings.go:142] acquiring lock: {Name:mk5d5753fc545a0b5fdfa44a1e5cbc5d198d9dfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:15:46.646019  143731 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 12:15:46.646880  143731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/kubeconfig: {Name:mk73914d2bd0db664ade6c952753a7dd30404784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:15:46.647153  143731 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 12:15:46.647214  143731 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 12:15:46.647299  143731 addons.go:69] Setting default-storageclass=true in profile "test-preload-967295"
	I0819 12:15:46.647294  143731 addons.go:69] Setting storage-provisioner=true in profile "test-preload-967295"
	I0819 12:15:46.647313  143731 config.go:182] Loaded profile config "test-preload-967295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0819 12:15:46.647337  143731 addons.go:234] Setting addon storage-provisioner=true in "test-preload-967295"
	I0819 12:15:46.647346  143731 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-967295"
	W0819 12:15:46.647349  143731 addons.go:243] addon storage-provisioner should already be in state true
	I0819 12:15:46.647381  143731 host.go:66] Checking if "test-preload-967295" exists ...
	I0819 12:15:46.647788  143731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:15:46.647826  143731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:15:46.647840  143731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:15:46.647907  143731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:15:46.648857  143731 out.go:177] * Verifying Kubernetes components...
	I0819 12:15:46.650083  143731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:15:46.663138  143731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34877
	I0819 12:15:46.663662  143731 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:15:46.664175  143731 main.go:141] libmachine: Using API Version  1
	I0819 12:15:46.664214  143731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:15:46.664573  143731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:15:46.665039  143731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:15:46.665069  143731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:15:46.667669  143731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38521
	I0819 12:15:46.668086  143731 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:15:46.668573  143731 main.go:141] libmachine: Using API Version  1
	I0819 12:15:46.668596  143731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:15:46.668927  143731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:15:46.669116  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetState
	I0819 12:15:46.671343  143731 kapi.go:59] client config for test-preload-967295: &rest.Config{Host:"https://192.168.39.161:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19476-99410/.minikube/profiles/test-preload-967295/client.crt", KeyFile:"/home/jenkins/minikube-integration/19476-99410/.minikube/profiles/test-preload-967295/client.key", CAFile:"/home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 12:15:46.671628  143731 addons.go:234] Setting addon default-storageclass=true in "test-preload-967295"
	W0819 12:15:46.671645  143731 addons.go:243] addon default-storageclass should already be in state true
	I0819 12:15:46.671672  143731 host.go:66] Checking if "test-preload-967295" exists ...
	I0819 12:15:46.671974  143731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:15:46.672003  143731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:15:46.685684  143731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38069
	I0819 12:15:46.686219  143731 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:15:46.686778  143731 main.go:141] libmachine: Using API Version  1
	I0819 12:15:46.686802  143731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:15:46.686823  143731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46177
	I0819 12:15:46.687205  143731 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:15:46.687217  143731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:15:46.687435  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetState
	I0819 12:15:46.687627  143731 main.go:141] libmachine: Using API Version  1
	I0819 12:15:46.687649  143731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:15:46.687944  143731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:15:46.688536  143731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:15:46.688570  143731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:15:46.689488  143731 main.go:141] libmachine: (test-preload-967295) Calling .DriverName
	I0819 12:15:46.691504  143731 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 12:15:46.692807  143731 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 12:15:46.692831  143731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 12:15:46.692855  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHHostname
	I0819 12:15:46.696501  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:46.697013  143731 main.go:141] libmachine: (test-preload-967295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:de:40", ip: ""} in network mk-test-preload-967295: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:10 +0000 UTC Type:0 Mac:52:54:00:2f:de:40 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-967295 Clientid:01:52:54:00:2f:de:40}
	I0819 12:15:46.697098  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined IP address 192.168.39.161 and MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:46.697252  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHPort
	I0819 12:15:46.697474  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHKeyPath
	I0819 12:15:46.697660  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHUsername
	I0819 12:15:46.697806  143731 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/test-preload-967295/id_rsa Username:docker}
	I0819 12:15:46.705161  143731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35179
	I0819 12:15:46.705677  143731 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:15:46.706319  143731 main.go:141] libmachine: Using API Version  1
	I0819 12:15:46.706347  143731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:15:46.706756  143731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:15:46.707000  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetState
	I0819 12:15:46.708792  143731 main.go:141] libmachine: (test-preload-967295) Calling .DriverName
	I0819 12:15:46.709022  143731 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 12:15:46.709037  143731 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 12:15:46.709052  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHHostname
	I0819 12:15:46.712429  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:46.712871  143731 main.go:141] libmachine: (test-preload-967295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:de:40", ip: ""} in network mk-test-preload-967295: {Iface:virbr1 ExpiryTime:2024-08-19 13:15:10 +0000 UTC Type:0 Mac:52:54:00:2f:de:40 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-967295 Clientid:01:52:54:00:2f:de:40}
	I0819 12:15:46.712900  143731 main.go:141] libmachine: (test-preload-967295) DBG | domain test-preload-967295 has defined IP address 192.168.39.161 and MAC address 52:54:00:2f:de:40 in network mk-test-preload-967295
	I0819 12:15:46.713058  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHPort
	I0819 12:15:46.713262  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHKeyPath
	I0819 12:15:46.713429  143731 main.go:141] libmachine: (test-preload-967295) Calling .GetSSHUsername
	I0819 12:15:46.713550  143731 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/test-preload-967295/id_rsa Username:docker}
	I0819 12:15:46.826832  143731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:15:46.844076  143731 node_ready.go:35] waiting up to 6m0s for node "test-preload-967295" to be "Ready" ...
	I0819 12:15:46.902488  143731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 12:15:46.920260  143731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 12:15:47.853081  143731 main.go:141] libmachine: Making call to close driver server
	I0819 12:15:47.853113  143731 main.go:141] libmachine: (test-preload-967295) Calling .Close
	I0819 12:15:47.853189  143731 main.go:141] libmachine: Making call to close driver server
	I0819 12:15:47.853199  143731 main.go:141] libmachine: (test-preload-967295) Calling .Close
	I0819 12:15:47.853426  143731 main.go:141] libmachine: Successfully made call to close driver server
	I0819 12:15:47.853446  143731 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 12:15:47.853456  143731 main.go:141] libmachine: Making call to close driver server
	I0819 12:15:47.853464  143731 main.go:141] libmachine: (test-preload-967295) Calling .Close
	I0819 12:15:47.853489  143731 main.go:141] libmachine: (test-preload-967295) DBG | Closing plugin on server side
	I0819 12:15:47.853502  143731 main.go:141] libmachine: Successfully made call to close driver server
	I0819 12:15:47.853517  143731 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 12:15:47.853540  143731 main.go:141] libmachine: Making call to close driver server
	I0819 12:15:47.853548  143731 main.go:141] libmachine: (test-preload-967295) Calling .Close
	I0819 12:15:47.853743  143731 main.go:141] libmachine: Successfully made call to close driver server
	I0819 12:15:47.853761  143731 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 12:15:47.853834  143731 main.go:141] libmachine: Successfully made call to close driver server
	I0819 12:15:47.853848  143731 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 12:15:47.853858  143731 main.go:141] libmachine: (test-preload-967295) DBG | Closing plugin on server side
	I0819 12:15:47.860901  143731 main.go:141] libmachine: Making call to close driver server
	I0819 12:15:47.860924  143731 main.go:141] libmachine: (test-preload-967295) Calling .Close
	I0819 12:15:47.861211  143731 main.go:141] libmachine: Successfully made call to close driver server
	I0819 12:15:47.861230  143731 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 12:15:47.861254  143731 main.go:141] libmachine: (test-preload-967295) DBG | Closing plugin on server side
	I0819 12:15:47.863748  143731 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0819 12:15:47.864842  143731 addons.go:510] duration metric: took 1.217631384s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0819 12:15:48.848737  143731 node_ready.go:53] node "test-preload-967295" has status "Ready":"False"
	I0819 12:15:50.856127  143731 node_ready.go:53] node "test-preload-967295" has status "Ready":"False"
	I0819 12:15:51.348322  143731 node_ready.go:49] node "test-preload-967295" has status "Ready":"True"
	I0819 12:15:51.348349  143731 node_ready.go:38] duration metric: took 4.504237186s for node "test-preload-967295" to be "Ready" ...
	I0819 12:15:51.348361  143731 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 12:15:51.353580  143731 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-8pr44" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:51.358897  143731 pod_ready.go:93] pod "coredns-6d4b75cb6d-8pr44" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:51.358920  143731 pod_ready.go:82] duration metric: took 5.312954ms for pod "coredns-6d4b75cb6d-8pr44" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:51.358930  143731 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-967295" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:51.363635  143731 pod_ready.go:93] pod "etcd-test-preload-967295" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:51.363656  143731 pod_ready.go:82] duration metric: took 4.719343ms for pod "etcd-test-preload-967295" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:51.363664  143731 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-967295" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:53.371313  143731 pod_ready.go:103] pod "kube-apiserver-test-preload-967295" in "kube-system" namespace has status "Ready":"False"
	I0819 12:15:53.872222  143731 pod_ready.go:93] pod "kube-apiserver-test-preload-967295" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:53.872249  143731 pod_ready.go:82] duration metric: took 2.508578187s for pod "kube-apiserver-test-preload-967295" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:53.872260  143731 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-967295" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:53.877054  143731 pod_ready.go:93] pod "kube-controller-manager-test-preload-967295" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:53.877076  143731 pod_ready.go:82] duration metric: took 4.810363ms for pod "kube-controller-manager-test-preload-967295" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:53.877086  143731 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ts7rh" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:53.881808  143731 pod_ready.go:93] pod "kube-proxy-ts7rh" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:53.881829  143731 pod_ready.go:82] duration metric: took 4.737181ms for pod "kube-proxy-ts7rh" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:53.881837  143731 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-967295" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:54.149130  143731 pod_ready.go:93] pod "kube-scheduler-test-preload-967295" in "kube-system" namespace has status "Ready":"True"
	I0819 12:15:54.149155  143731 pod_ready.go:82] duration metric: took 267.3113ms for pod "kube-scheduler-test-preload-967295" in "kube-system" namespace to be "Ready" ...
	I0819 12:15:54.149165  143731 pod_ready.go:39] duration metric: took 2.800793448s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 12:15:54.149182  143731 api_server.go:52] waiting for apiserver process to appear ...
	I0819 12:15:54.149233  143731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:15:54.163148  143731 api_server.go:72] duration metric: took 7.515954417s to wait for apiserver process to appear ...
	I0819 12:15:54.163180  143731 api_server.go:88] waiting for apiserver healthz status ...
	I0819 12:15:54.163239  143731 api_server.go:253] Checking apiserver healthz at https://192.168.39.161:8443/healthz ...
	I0819 12:15:54.168836  143731 api_server.go:279] https://192.168.39.161:8443/healthz returned 200:
	ok
	I0819 12:15:54.169741  143731 api_server.go:141] control plane version: v1.24.4
	I0819 12:15:54.169762  143731 api_server.go:131] duration metric: took 6.575049ms to wait for apiserver health ...
	I0819 12:15:54.169769  143731 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 12:15:54.350901  143731 system_pods.go:59] 7 kube-system pods found
	I0819 12:15:54.350939  143731 system_pods.go:61] "coredns-6d4b75cb6d-8pr44" [47659e16-00c4-4b79-bc76-5264959ff870] Running
	I0819 12:15:54.350947  143731 system_pods.go:61] "etcd-test-preload-967295" [e80f6297-f78a-455e-abc8-587773329b0d] Running
	I0819 12:15:54.350953  143731 system_pods.go:61] "kube-apiserver-test-preload-967295" [e35a8c1a-6f30-44ef-8c2b-f1ae73917829] Running
	I0819 12:15:54.350960  143731 system_pods.go:61] "kube-controller-manager-test-preload-967295" [60819d41-b82d-4e3d-a844-220b3fcbbff6] Running
	I0819 12:15:54.350965  143731 system_pods.go:61] "kube-proxy-ts7rh" [43510626-369e-4bb7-992f-b4d6f965fd38] Running
	I0819 12:15:54.350970  143731 system_pods.go:61] "kube-scheduler-test-preload-967295" [92d236c4-b41f-48ea-93de-e767a7b8e475] Running
	I0819 12:15:54.350978  143731 system_pods.go:61] "storage-provisioner" [5f068e4e-85fb-402c-a6f7-8690a1bf26fe] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 12:15:54.350986  143731 system_pods.go:74] duration metric: took 181.210738ms to wait for pod list to return data ...
	I0819 12:15:54.351000  143731 default_sa.go:34] waiting for default service account to be created ...
	I0819 12:15:54.548213  143731 default_sa.go:45] found service account: "default"
	I0819 12:15:54.548244  143731 default_sa.go:55] duration metric: took 197.236252ms for default service account to be created ...
	I0819 12:15:54.548256  143731 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 12:15:54.750700  143731 system_pods.go:86] 7 kube-system pods found
	I0819 12:15:54.750733  143731 system_pods.go:89] "coredns-6d4b75cb6d-8pr44" [47659e16-00c4-4b79-bc76-5264959ff870] Running
	I0819 12:15:54.750743  143731 system_pods.go:89] "etcd-test-preload-967295" [e80f6297-f78a-455e-abc8-587773329b0d] Running
	I0819 12:15:54.750747  143731 system_pods.go:89] "kube-apiserver-test-preload-967295" [e35a8c1a-6f30-44ef-8c2b-f1ae73917829] Running
	I0819 12:15:54.750752  143731 system_pods.go:89] "kube-controller-manager-test-preload-967295" [60819d41-b82d-4e3d-a844-220b3fcbbff6] Running
	I0819 12:15:54.750755  143731 system_pods.go:89] "kube-proxy-ts7rh" [43510626-369e-4bb7-992f-b4d6f965fd38] Running
	I0819 12:15:54.750758  143731 system_pods.go:89] "kube-scheduler-test-preload-967295" [92d236c4-b41f-48ea-93de-e767a7b8e475] Running
	I0819 12:15:54.750767  143731 system_pods.go:89] "storage-provisioner" [5f068e4e-85fb-402c-a6f7-8690a1bf26fe] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 12:15:54.750774  143731 system_pods.go:126] duration metric: took 202.512341ms to wait for k8s-apps to be running ...
	I0819 12:15:54.750788  143731 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 12:15:54.750839  143731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:15:54.764606  143731 system_svc.go:56] duration metric: took 13.80326ms WaitForService to wait for kubelet
	I0819 12:15:54.764647  143731 kubeadm.go:582] duration metric: took 8.117458697s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 12:15:54.764673  143731 node_conditions.go:102] verifying NodePressure condition ...
	I0819 12:15:54.948494  143731 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 12:15:54.948528  143731 node_conditions.go:123] node cpu capacity is 2
	I0819 12:15:54.948543  143731 node_conditions.go:105] duration metric: took 183.863491ms to run NodePressure ...
	I0819 12:15:54.948558  143731 start.go:241] waiting for startup goroutines ...
	I0819 12:15:54.948567  143731 start.go:246] waiting for cluster config update ...
	I0819 12:15:54.948580  143731 start.go:255] writing updated cluster config ...
	I0819 12:15:54.948901  143731 ssh_runner.go:195] Run: rm -f paused
	I0819 12:15:54.996590  143731 start.go:600] kubectl: 1.31.0, cluster: 1.24.4 (minor skew: 7)
	I0819 12:15:54.998637  143731 out.go:201] 
	W0819 12:15:54.999869  143731 out.go:270] ! /usr/local/bin/kubectl is version 1.31.0, which may have incompatibilities with Kubernetes 1.24.4.
	I0819 12:15:55.001006  143731 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0819 12:15:55.002232  143731 out.go:177] * Done! kubectl is now configured to use "test-preload-967295" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 19 12:15:55 test-preload-967295 crio[700]: time="2024-08-19 12:15:55.952671800Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069755952648265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60bbf57c-d320-41ad-817a-01e6e0988b70 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:15:55 test-preload-967295 crio[700]: time="2024-08-19 12:15:55.953159328Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4a75666-e997-4160-a160-acd527c986bb name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:15:55 test-preload-967295 crio[700]: time="2024-08-19 12:15:55.953235130Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4a75666-e997-4160-a160-acd527c986bb name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:15:55 test-preload-967295 crio[700]: time="2024-08-19 12:15:55.953417531Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:533d5c3f875b08c3d2f5b951ed17a24846ddc83a9c462ec3a18f502c62c982b9,PodSandboxId:c81c6653786f43a1679711bd63377e6e25cd6b60e6d50a552a7cca7c75fe80b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724069755390740282,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f068e4e-85fb-402c-a6f7-8690a1bf26fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7f06660,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c88b6c0d83ffad4140967a072a324ced1c16d33fc4bf53197482ebb39ca0d6,PodSandboxId:8bf73ae5b0361e9d30da23ae78ebc841e36d49a16ac795e0746e33033e9e49c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1724069749501784044,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-8pr44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47659e16-00c4-4b79-bc76-5264959ff870,},Annotations:map[string]string{io.kubernetes.container.hash: 2a1e4cc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d6ac19259a09fe56645e28694681198b8cb998de3db1db1f94a819edef8bc87,PodSandboxId:c81c6653786f43a1679711bd63377e6e25cd6b60e6d50a552a7cca7c75fe80b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724069742433417334,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 5f068e4e-85fb-402c-a6f7-8690a1bf26fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7f06660,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f38836de03e70659b442b362d4f728da12a3a7f4e0de062a95b480a337b0f96,PodSandboxId:36df6b32eae6ad7a22295e1331b523858d0485f91d6fbe831dc1c3f03ebcf4ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1724069742288031449,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ts7rh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43510626-369e-4bb
7-992f-b4d6f965fd38,},Annotations:map[string]string{io.kubernetes.container.hash: fa05df2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab9556f3ecbe9e5ce07d3c55a6594be00cac1310f04463e2f359065b0938c92a,PodSandboxId:6b6eff2d0262ce824d482fbf3243d7f722aab4510b6ad1219154e75e0dd48617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1724069736079775551,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-967295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3604f78a11b28df4603a8a26
99a1d868,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf2fe05e90edf626df4e44b045099c3917b4bd857514f3d4765252403488abc,PodSandboxId:1559eba066e4c0a11cabc08e31c39b45b6b2368144b91ec39e0ba1ddffadf387,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1724069736020513095,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-967295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb974e5deb7c6abbc4683ca137171f63,},
Annotations:map[string]string{io.kubernetes.container.hash: cef15eeb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f9cbc425e716e3739b5202d6378302c2bb87e6e3d1981556dc379e713ba2d2,PodSandboxId:c6f6d353a02381baaf015c8e818f1f4c635471cce14e6a676dc12093da77ab26,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1724069736024301411,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-967295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56474a166ac4fb4fff32e556504fc56b,},Annotations:map[string]string{io.kubernet
es.container.hash: 53fc14e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6373810baa340bc66299f9c4b7d805f0fc6e2f2d542f7b35a820c0dedff165,PodSandboxId:f3ce95c9dcbcc9552b4c545307ee3ccbc4296b38337ae3bcd5853bc28efceeb0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1724069736002560622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-967295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6cdc78cce48268d9bf18c8311928418,},Annotations:map[string]st
ring{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4a75666-e997-4160-a160-acd527c986bb name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:15:55 test-preload-967295 crio[700]: time="2024-08-19 12:15:55.989013185Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4585aeaa-8029-4881-ae2a-772876064aea name=/runtime.v1.RuntimeService/Version
	Aug 19 12:15:55 test-preload-967295 crio[700]: time="2024-08-19 12:15:55.989097388Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4585aeaa-8029-4881-ae2a-772876064aea name=/runtime.v1.RuntimeService/Version
	Aug 19 12:15:55 test-preload-967295 crio[700]: time="2024-08-19 12:15:55.990336381Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b859cc4-7415-4e96-aad0-11616186737f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:15:55 test-preload-967295 crio[700]: time="2024-08-19 12:15:55.990785416Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069755990762570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b859cc4-7415-4e96-aad0-11616186737f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:15:55 test-preload-967295 crio[700]: time="2024-08-19 12:15:55.991426309Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=71236eba-3eea-4f33-864d-ba49fadb4437 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:15:55 test-preload-967295 crio[700]: time="2024-08-19 12:15:55.991476726Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=71236eba-3eea-4f33-864d-ba49fadb4437 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:15:55 test-preload-967295 crio[700]: time="2024-08-19 12:15:55.991676398Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:533d5c3f875b08c3d2f5b951ed17a24846ddc83a9c462ec3a18f502c62c982b9,PodSandboxId:c81c6653786f43a1679711bd63377e6e25cd6b60e6d50a552a7cca7c75fe80b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724069755390740282,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f068e4e-85fb-402c-a6f7-8690a1bf26fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7f06660,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c88b6c0d83ffad4140967a072a324ced1c16d33fc4bf53197482ebb39ca0d6,PodSandboxId:8bf73ae5b0361e9d30da23ae78ebc841e36d49a16ac795e0746e33033e9e49c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1724069749501784044,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-8pr44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47659e16-00c4-4b79-bc76-5264959ff870,},Annotations:map[string]string{io.kubernetes.container.hash: 2a1e4cc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d6ac19259a09fe56645e28694681198b8cb998de3db1db1f94a819edef8bc87,PodSandboxId:c81c6653786f43a1679711bd63377e6e25cd6b60e6d50a552a7cca7c75fe80b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724069742433417334,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 5f068e4e-85fb-402c-a6f7-8690a1bf26fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7f06660,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f38836de03e70659b442b362d4f728da12a3a7f4e0de062a95b480a337b0f96,PodSandboxId:36df6b32eae6ad7a22295e1331b523858d0485f91d6fbe831dc1c3f03ebcf4ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1724069742288031449,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ts7rh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43510626-369e-4bb
7-992f-b4d6f965fd38,},Annotations:map[string]string{io.kubernetes.container.hash: fa05df2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab9556f3ecbe9e5ce07d3c55a6594be00cac1310f04463e2f359065b0938c92a,PodSandboxId:6b6eff2d0262ce824d482fbf3243d7f722aab4510b6ad1219154e75e0dd48617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1724069736079775551,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-967295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3604f78a11b28df4603a8a26
99a1d868,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf2fe05e90edf626df4e44b045099c3917b4bd857514f3d4765252403488abc,PodSandboxId:1559eba066e4c0a11cabc08e31c39b45b6b2368144b91ec39e0ba1ddffadf387,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1724069736020513095,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-967295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb974e5deb7c6abbc4683ca137171f63,},
Annotations:map[string]string{io.kubernetes.container.hash: cef15eeb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f9cbc425e716e3739b5202d6378302c2bb87e6e3d1981556dc379e713ba2d2,PodSandboxId:c6f6d353a02381baaf015c8e818f1f4c635471cce14e6a676dc12093da77ab26,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1724069736024301411,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-967295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56474a166ac4fb4fff32e556504fc56b,},Annotations:map[string]string{io.kubernet
es.container.hash: 53fc14e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6373810baa340bc66299f9c4b7d805f0fc6e2f2d542f7b35a820c0dedff165,PodSandboxId:f3ce95c9dcbcc9552b4c545307ee3ccbc4296b38337ae3bcd5853bc28efceeb0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1724069736002560622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-967295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6cdc78cce48268d9bf18c8311928418,},Annotations:map[string]st
ring{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=71236eba-3eea-4f33-864d-ba49fadb4437 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:15:56 test-preload-967295 crio[700]: time="2024-08-19 12:15:56.026931090Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0bd60904-5d7d-4d96-a59d-f37727faa2b3 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:15:56 test-preload-967295 crio[700]: time="2024-08-19 12:15:56.027021360Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0bd60904-5d7d-4d96-a59d-f37727faa2b3 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:15:56 test-preload-967295 crio[700]: time="2024-08-19 12:15:56.028305293Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ecba2955-288f-4b3b-bd55-19fb4df40449 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:15:56 test-preload-967295 crio[700]: time="2024-08-19 12:15:56.029120861Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069756029094476,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ecba2955-288f-4b3b-bd55-19fb4df40449 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:15:56 test-preload-967295 crio[700]: time="2024-08-19 12:15:56.029700610Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a142e878-510e-4f15-80f5-c2f6d413343b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:15:56 test-preload-967295 crio[700]: time="2024-08-19 12:15:56.029781519Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a142e878-510e-4f15-80f5-c2f6d413343b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:15:56 test-preload-967295 crio[700]: time="2024-08-19 12:15:56.030003050Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:533d5c3f875b08c3d2f5b951ed17a24846ddc83a9c462ec3a18f502c62c982b9,PodSandboxId:c81c6653786f43a1679711bd63377e6e25cd6b60e6d50a552a7cca7c75fe80b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724069755390740282,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f068e4e-85fb-402c-a6f7-8690a1bf26fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7f06660,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c88b6c0d83ffad4140967a072a324ced1c16d33fc4bf53197482ebb39ca0d6,PodSandboxId:8bf73ae5b0361e9d30da23ae78ebc841e36d49a16ac795e0746e33033e9e49c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1724069749501784044,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-8pr44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47659e16-00c4-4b79-bc76-5264959ff870,},Annotations:map[string]string{io.kubernetes.container.hash: 2a1e4cc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d6ac19259a09fe56645e28694681198b8cb998de3db1db1f94a819edef8bc87,PodSandboxId:c81c6653786f43a1679711bd63377e6e25cd6b60e6d50a552a7cca7c75fe80b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724069742433417334,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 5f068e4e-85fb-402c-a6f7-8690a1bf26fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7f06660,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f38836de03e70659b442b362d4f728da12a3a7f4e0de062a95b480a337b0f96,PodSandboxId:36df6b32eae6ad7a22295e1331b523858d0485f91d6fbe831dc1c3f03ebcf4ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1724069742288031449,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ts7rh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43510626-369e-4bb
7-992f-b4d6f965fd38,},Annotations:map[string]string{io.kubernetes.container.hash: fa05df2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab9556f3ecbe9e5ce07d3c55a6594be00cac1310f04463e2f359065b0938c92a,PodSandboxId:6b6eff2d0262ce824d482fbf3243d7f722aab4510b6ad1219154e75e0dd48617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1724069736079775551,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-967295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3604f78a11b28df4603a8a26
99a1d868,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf2fe05e90edf626df4e44b045099c3917b4bd857514f3d4765252403488abc,PodSandboxId:1559eba066e4c0a11cabc08e31c39b45b6b2368144b91ec39e0ba1ddffadf387,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1724069736020513095,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-967295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb974e5deb7c6abbc4683ca137171f63,},
Annotations:map[string]string{io.kubernetes.container.hash: cef15eeb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f9cbc425e716e3739b5202d6378302c2bb87e6e3d1981556dc379e713ba2d2,PodSandboxId:c6f6d353a02381baaf015c8e818f1f4c635471cce14e6a676dc12093da77ab26,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1724069736024301411,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-967295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56474a166ac4fb4fff32e556504fc56b,},Annotations:map[string]string{io.kubernet
es.container.hash: 53fc14e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6373810baa340bc66299f9c4b7d805f0fc6e2f2d542f7b35a820c0dedff165,PodSandboxId:f3ce95c9dcbcc9552b4c545307ee3ccbc4296b38337ae3bcd5853bc28efceeb0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1724069736002560622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-967295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6cdc78cce48268d9bf18c8311928418,},Annotations:map[string]st
ring{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a142e878-510e-4f15-80f5-c2f6d413343b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:15:56 test-preload-967295 crio[700]: time="2024-08-19 12:15:56.066534963Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd4b92ec-4ede-4b82-b173-31e95ee50beb name=/runtime.v1.RuntimeService/Version
	Aug 19 12:15:56 test-preload-967295 crio[700]: time="2024-08-19 12:15:56.066611589Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd4b92ec-4ede-4b82-b173-31e95ee50beb name=/runtime.v1.RuntimeService/Version
	Aug 19 12:15:56 test-preload-967295 crio[700]: time="2024-08-19 12:15:56.068092908Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=669508e1-9881-4045-9836-a18f3237991b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:15:56 test-preload-967295 crio[700]: time="2024-08-19 12:15:56.068507054Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724069756068486328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=669508e1-9881-4045-9836-a18f3237991b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:15:56 test-preload-967295 crio[700]: time="2024-08-19 12:15:56.069027312Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=06efea5d-8b6c-471b-9d63-2766787f1c95 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:15:56 test-preload-967295 crio[700]: time="2024-08-19 12:15:56.069075652Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=06efea5d-8b6c-471b-9d63-2766787f1c95 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:15:56 test-preload-967295 crio[700]: time="2024-08-19 12:15:56.069248717Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:533d5c3f875b08c3d2f5b951ed17a24846ddc83a9c462ec3a18f502c62c982b9,PodSandboxId:c81c6653786f43a1679711bd63377e6e25cd6b60e6d50a552a7cca7c75fe80b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724069755390740282,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f068e4e-85fb-402c-a6f7-8690a1bf26fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7f06660,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c88b6c0d83ffad4140967a072a324ced1c16d33fc4bf53197482ebb39ca0d6,PodSandboxId:8bf73ae5b0361e9d30da23ae78ebc841e36d49a16ac795e0746e33033e9e49c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1724069749501784044,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-8pr44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47659e16-00c4-4b79-bc76-5264959ff870,},Annotations:map[string]string{io.kubernetes.container.hash: 2a1e4cc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d6ac19259a09fe56645e28694681198b8cb998de3db1db1f94a819edef8bc87,PodSandboxId:c81c6653786f43a1679711bd63377e6e25cd6b60e6d50a552a7cca7c75fe80b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724069742433417334,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 5f068e4e-85fb-402c-a6f7-8690a1bf26fe,},Annotations:map[string]string{io.kubernetes.container.hash: 7f06660,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f38836de03e70659b442b362d4f728da12a3a7f4e0de062a95b480a337b0f96,PodSandboxId:36df6b32eae6ad7a22295e1331b523858d0485f91d6fbe831dc1c3f03ebcf4ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1724069742288031449,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ts7rh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43510626-369e-4bb
7-992f-b4d6f965fd38,},Annotations:map[string]string{io.kubernetes.container.hash: fa05df2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab9556f3ecbe9e5ce07d3c55a6594be00cac1310f04463e2f359065b0938c92a,PodSandboxId:6b6eff2d0262ce824d482fbf3243d7f722aab4510b6ad1219154e75e0dd48617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1724069736079775551,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-967295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3604f78a11b28df4603a8a26
99a1d868,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf2fe05e90edf626df4e44b045099c3917b4bd857514f3d4765252403488abc,PodSandboxId:1559eba066e4c0a11cabc08e31c39b45b6b2368144b91ec39e0ba1ddffadf387,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1724069736020513095,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-967295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb974e5deb7c6abbc4683ca137171f63,},
Annotations:map[string]string{io.kubernetes.container.hash: cef15eeb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f9cbc425e716e3739b5202d6378302c2bb87e6e3d1981556dc379e713ba2d2,PodSandboxId:c6f6d353a02381baaf015c8e818f1f4c635471cce14e6a676dc12093da77ab26,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1724069736024301411,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-967295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56474a166ac4fb4fff32e556504fc56b,},Annotations:map[string]string{io.kubernet
es.container.hash: 53fc14e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6373810baa340bc66299f9c4b7d805f0fc6e2f2d542f7b35a820c0dedff165,PodSandboxId:f3ce95c9dcbcc9552b4c545307ee3ccbc4296b38337ae3bcd5853bc28efceeb0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1724069736002560622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-967295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6cdc78cce48268d9bf18c8311928418,},Annotations:map[string]st
ring{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=06efea5d-8b6c-471b-9d63-2766787f1c95 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	533d5c3f875b0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   Less than a second ago   Running             storage-provisioner       3                   c81c6653786f4       storage-provisioner
	58c88b6c0d83f       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   6 seconds ago            Running             coredns                   1                   8bf73ae5b0361       coredns-6d4b75cb6d-8pr44
	6d6ac19259a09       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago           Exited              storage-provisioner       2                   c81c6653786f4       storage-provisioner
	7f38836de03e7       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   13 seconds ago           Running             kube-proxy                1                   36df6b32eae6a       kube-proxy-ts7rh
	ab9556f3ecbe9       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago           Running             kube-scheduler            1                   6b6eff2d0262c       kube-scheduler-test-preload-967295
	03f9cbc425e71       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago           Running             etcd                      1                   c6f6d353a0238       etcd-test-preload-967295
	1bf2fe05e90ed       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago           Running             kube-apiserver            1                   1559eba066e4c       kube-apiserver-test-preload-967295
	ab6373810baa3       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago           Running             kube-controller-manager   1                   f3ce95c9dcbcc       kube-controller-manager-test-preload-967295
	
	
	==> coredns [58c88b6c0d83ffad4140967a072a324ced1c16d33fc4bf53197482ebb39ca0d6] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:35496 - 47060 "HINFO IN 7472087076426171768.1027779342739771112. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020689679s
	
	
	==> describe nodes <==
	Name:               test-preload-967295
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-967295
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=test-preload-967295
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T12_14_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:14:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-967295
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:15:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:15:50 +0000   Mon, 19 Aug 2024 12:14:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:15:50 +0000   Mon, 19 Aug 2024 12:14:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:15:50 +0000   Mon, 19 Aug 2024 12:14:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:15:50 +0000   Mon, 19 Aug 2024 12:15:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    test-preload-967295
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ecbd3309da564f5887f679279a89be88
	  System UUID:                ecbd3309-da56-4f58-87f6-79279a89be88
	  Boot ID:                    0211ec36-840f-4d69-9407-877652a1ddaa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-8pr44                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     75s
	  kube-system                 etcd-test-preload-967295                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         88s
	  kube-system                 kube-apiserver-test-preload-967295             250m (12%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-controller-manager-test-preload-967295    200m (10%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-proxy-ts7rh                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-scheduler-test-preload-967295             100m (5%)     0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  Starting                 73s                kube-proxy       
	  Normal  Starting                 89s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  88s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  88s                kubelet          Node test-preload-967295 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s                kubelet          Node test-preload-967295 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s                kubelet          Node test-preload-967295 status is now: NodeHasSufficientPID
	  Normal  NodeReady                78s                kubelet          Node test-preload-967295 status is now: NodeReady
	  Normal  RegisteredNode           76s                node-controller  Node test-preload-967295 event: Registered Node test-preload-967295 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-967295 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-967295 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-967295 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                 node-controller  Node test-preload-967295 event: Registered Node test-preload-967295 in Controller
	
	
	==> dmesg <==
	[Aug19 12:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.048539] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036904] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.731496] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.884007] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.567641] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.084530] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.055877] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053626] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.172085] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.136528] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.272681] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[ +12.920902] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	[  +0.063718] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.744311] systemd-fstab-generator[1148]: Ignoring "noauto" option for root device
	[  +5.767414] kauditd_printk_skb: 105 callbacks suppressed
	[  +5.862454] systemd-fstab-generator[1831]: Ignoring "noauto" option for root device
	[  +0.096542] kauditd_printk_skb: 37 callbacks suppressed
	[  +6.379476] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [03f9cbc425e716e3739b5202d6378302c2bb87e6e3d1981556dc379e713ba2d2] <==
	{"level":"info","ts":"2024-08-19T12:15:36.343Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"59d4e9d626571860","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-19T12:15:36.363Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T12:15:36.363Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"59d4e9d626571860","initial-advertise-peer-urls":["https://192.168.39.161:2380"],"listen-peer-urls":["https://192.168.39.161:2380"],"advertise-client-urls":["https://192.168.39.161:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.161:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T12:15:36.363Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T12:15:36.364Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.161:2380"}
	{"level":"info","ts":"2024-08-19T12:15:36.364Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.161:2380"}
	{"level":"info","ts":"2024-08-19T12:15:36.364Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-19T12:15:36.364Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 switched to configuration voters=(6473055670413760608)"}
	{"level":"info","ts":"2024-08-19T12:15:36.364Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"641f62d988bc06c1","local-member-id":"59d4e9d626571860","added-peer-id":"59d4e9d626571860","added-peer-peer-urls":["https://192.168.39.161:2380"]}
	{"level":"info","ts":"2024-08-19T12:15:36.364Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"641f62d988bc06c1","local-member-id":"59d4e9d626571860","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T12:15:36.364Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T12:15:38.208Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T12:15:38.208Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T12:15:38.208Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 received MsgPreVoteResp from 59d4e9d626571860 at term 2"}
	{"level":"info","ts":"2024-08-19T12:15:38.208Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T12:15:38.208Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 received MsgVoteResp from 59d4e9d626571860 at term 3"}
	{"level":"info","ts":"2024-08-19T12:15:38.208Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 became leader at term 3"}
	{"level":"info","ts":"2024-08-19T12:15:38.208Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 59d4e9d626571860 elected leader 59d4e9d626571860 at term 3"}
	{"level":"info","ts":"2024-08-19T12:15:38.214Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"59d4e9d626571860","local-member-attributes":"{Name:test-preload-967295 ClientURLs:[https://192.168.39.161:2379]}","request-path":"/0/members/59d4e9d626571860/attributes","cluster-id":"641f62d988bc06c1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T12:15:38.214Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T12:15:38.214Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T12:15:38.216Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T12:15:38.217Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.161:2379"}
	{"level":"info","ts":"2024-08-19T12:15:38.217Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T12:15:38.217Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 12:15:56 up 0 min,  0 users,  load average: 0.42, 0.12, 0.04
	Linux test-preload-967295 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1bf2fe05e90edf626df4e44b045099c3917b4bd857514f3d4765252403488abc] <==
	I0819 12:15:40.549454       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0819 12:15:40.549489       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0819 12:15:40.549544       1 apf_controller.go:317] Starting API Priority and Fairness config controller
	I0819 12:15:40.549494       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0819 12:15:40.589274       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0819 12:15:40.603139       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0819 12:15:40.683315       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0819 12:15:40.741869       1 cache.go:39] Caches are synced for autoregister controller
	I0819 12:15:40.746292       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 12:15:40.746379       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0819 12:15:40.750253       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 12:15:40.756056       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0819 12:15:40.761639       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0819 12:15:40.771620       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0819 12:15:40.771670       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0819 12:15:41.246280       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0819 12:15:41.551161       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 12:15:42.043707       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0819 12:15:42.057078       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0819 12:15:42.110279       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0819 12:15:42.134091       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 12:15:42.153009       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 12:15:42.590785       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0819 12:15:53.152229       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 12:15:53.154776       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ab6373810baa340bc66299f9c4b7d805f0fc6e2f2d542f7b35a820c0dedff165] <==
	I0819 12:15:53.140708       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0819 12:15:53.141375       1 shared_informer.go:262] Caches are synced for PVC protection
	I0819 12:15:53.141544       1 shared_informer.go:262] Caches are synced for ephemeral
	I0819 12:15:53.145961       1 shared_informer.go:262] Caches are synced for daemon sets
	I0819 12:15:53.151910       1 shared_informer.go:262] Caches are synced for HPA
	I0819 12:15:53.153249       1 shared_informer.go:262] Caches are synced for stateful set
	I0819 12:15:53.186573       1 shared_informer.go:262] Caches are synced for persistent volume
	I0819 12:15:53.191007       1 shared_informer.go:262] Caches are synced for GC
	I0819 12:15:53.198357       1 shared_informer.go:262] Caches are synced for taint
	I0819 12:15:53.198510       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I0819 12:15:53.198584       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W0819 12:15:53.198663       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-967295. Assuming now as a timestamp.
	I0819 12:15:53.198712       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0819 12:15:53.198906       1 event.go:294] "Event occurred" object="test-preload-967295" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-967295 event: Registered Node test-preload-967295 in Controller"
	I0819 12:15:53.206442       1 shared_informer.go:262] Caches are synced for attach detach
	I0819 12:15:53.213913       1 shared_informer.go:262] Caches are synced for job
	I0819 12:15:53.219288       1 shared_informer.go:262] Caches are synced for resource quota
	I0819 12:15:53.224694       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0819 12:15:53.232011       1 shared_informer.go:262] Caches are synced for disruption
	I0819 12:15:53.232108       1 disruption.go:371] Sending events to api server.
	I0819 12:15:53.258231       1 shared_informer.go:262] Caches are synced for resource quota
	I0819 12:15:53.284124       1 shared_informer.go:262] Caches are synced for deployment
	I0819 12:15:53.675728       1 shared_informer.go:262] Caches are synced for garbage collector
	I0819 12:15:53.675857       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0819 12:15:53.698969       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [7f38836de03e70659b442b362d4f728da12a3a7f4e0de062a95b480a337b0f96] <==
	I0819 12:15:42.534631       1 node.go:163] Successfully retrieved node IP: 192.168.39.161
	I0819 12:15:42.534778       1 server_others.go:138] "Detected node IP" address="192.168.39.161"
	I0819 12:15:42.535220       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0819 12:15:42.581271       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0819 12:15:42.581290       1 server_others.go:206] "Using iptables Proxier"
	I0819 12:15:42.582094       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0819 12:15:42.582955       1 server.go:661] "Version info" version="v1.24.4"
	I0819 12:15:42.582999       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:15:42.584318       1 config.go:317] "Starting service config controller"
	I0819 12:15:42.584718       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0819 12:15:42.584947       1 config.go:226] "Starting endpoint slice config controller"
	I0819 12:15:42.584978       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0819 12:15:42.585936       1 config.go:444] "Starting node config controller"
	I0819 12:15:42.588399       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0819 12:15:42.685532       1 shared_informer.go:262] Caches are synced for service config
	I0819 12:15:42.685607       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0819 12:15:42.688706       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [ab9556f3ecbe9e5ce07d3c55a6594be00cac1310f04463e2f359065b0938c92a] <==
	I0819 12:15:36.588402       1 serving.go:348] Generated self-signed cert in-memory
	W0819 12:15:40.615507       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 12:15:40.615997       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 12:15:40.616035       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 12:15:40.616043       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 12:15:40.681028       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0819 12:15:40.681066       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:15:40.694044       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0819 12:15:40.695449       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 12:15:40.696157       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 12:15:40.695480       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0819 12:15:40.797995       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 12:15:41 test-preload-967295 kubelet[1155]: I0819 12:15:41.457117    1155 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-672nz\" (UniqueName: \"kubernetes.io/projected/2355e358-8f44-471c-a7f1-7a03ffc7fce5-kube-api-access-672nz\") pod \"2355e358-8f44-471c-a7f1-7a03ffc7fce5\" (UID: \"2355e358-8f44-471c-a7f1-7a03ffc7fce5\") "
	Aug 19 12:15:41 test-preload-967295 kubelet[1155]: I0819 12:15:41.457431    1155 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2355e358-8f44-471c-a7f1-7a03ffc7fce5-config-volume\") pod \"2355e358-8f44-471c-a7f1-7a03ffc7fce5\" (UID: \"2355e358-8f44-471c-a7f1-7a03ffc7fce5\") "
	Aug 19 12:15:41 test-preload-967295 kubelet[1155]: E0819 12:15:41.458291    1155 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 19 12:15:41 test-preload-967295 kubelet[1155]: E0819 12:15:41.458372    1155 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/47659e16-00c4-4b79-bc76-5264959ff870-config-volume podName:47659e16-00c4-4b79-bc76-5264959ff870 nodeName:}" failed. No retries permitted until 2024-08-19 12:15:41.958352613 +0000 UTC m=+6.801686661 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/47659e16-00c4-4b79-bc76-5264959ff870-config-volume") pod "coredns-6d4b75cb6d-8pr44" (UID: "47659e16-00c4-4b79-bc76-5264959ff870") : object "kube-system"/"coredns" not registered
	Aug 19 12:15:41 test-preload-967295 kubelet[1155]: W0819 12:15:41.459259    1155 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/2355e358-8f44-471c-a7f1-7a03ffc7fce5/volumes/kubernetes.io~projected/kube-api-access-672nz: clearQuota called, but quotas disabled
	Aug 19 12:15:41 test-preload-967295 kubelet[1155]: W0819 12:15:41.459493    1155 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/2355e358-8f44-471c-a7f1-7a03ffc7fce5/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 19 12:15:41 test-preload-967295 kubelet[1155]: I0819 12:15:41.459795    1155 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2355e358-8f44-471c-a7f1-7a03ffc7fce5-kube-api-access-672nz" (OuterVolumeSpecName: "kube-api-access-672nz") pod "2355e358-8f44-471c-a7f1-7a03ffc7fce5" (UID: "2355e358-8f44-471c-a7f1-7a03ffc7fce5"). InnerVolumeSpecName "kube-api-access-672nz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 12:15:41 test-preload-967295 kubelet[1155]: I0819 12:15:41.460179    1155 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2355e358-8f44-471c-a7f1-7a03ffc7fce5-config-volume" (OuterVolumeSpecName: "config-volume") pod "2355e358-8f44-471c-a7f1-7a03ffc7fce5" (UID: "2355e358-8f44-471c-a7f1-7a03ffc7fce5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 19 12:15:41 test-preload-967295 kubelet[1155]: I0819 12:15:41.558889    1155 reconciler.go:384] "Volume detached for volume \"kube-api-access-672nz\" (UniqueName: \"kubernetes.io/projected/2355e358-8f44-471c-a7f1-7a03ffc7fce5-kube-api-access-672nz\") on node \"test-preload-967295\" DevicePath \"\""
	Aug 19 12:15:41 test-preload-967295 kubelet[1155]: I0819 12:15:41.558966    1155 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2355e358-8f44-471c-a7f1-7a03ffc7fce5-config-volume\") on node \"test-preload-967295\" DevicePath \"\""
	Aug 19 12:15:41 test-preload-967295 kubelet[1155]: E0819 12:15:41.962079    1155 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 19 12:15:41 test-preload-967295 kubelet[1155]: E0819 12:15:41.962140    1155 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/47659e16-00c4-4b79-bc76-5264959ff870-config-volume podName:47659e16-00c4-4b79-bc76-5264959ff870 nodeName:}" failed. No retries permitted until 2024-08-19 12:15:42.962127235 +0000 UTC m=+7.805461262 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/47659e16-00c4-4b79-bc76-5264959ff870-config-volume") pod "coredns-6d4b75cb6d-8pr44" (UID: "47659e16-00c4-4b79-bc76-5264959ff870") : object "kube-system"/"coredns" not registered
	Aug 19 12:15:42 test-preload-967295 kubelet[1155]: I0819 12:15:42.422392    1155 scope.go:110] "RemoveContainer" containerID="62dacd169b5e9ba655ec71d02d110f300c21965a59637d78086cc12f81a4ffa3"
	Aug 19 12:15:42 test-preload-967295 kubelet[1155]: E0819 12:15:42.968029    1155 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 19 12:15:42 test-preload-967295 kubelet[1155]: E0819 12:15:42.968135    1155 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/47659e16-00c4-4b79-bc76-5264959ff870-config-volume podName:47659e16-00c4-4b79-bc76-5264959ff870 nodeName:}" failed. No retries permitted until 2024-08-19 12:15:44.968111517 +0000 UTC m=+9.811445559 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/47659e16-00c4-4b79-bc76-5264959ff870-config-volume") pod "coredns-6d4b75cb6d-8pr44" (UID: "47659e16-00c4-4b79-bc76-5264959ff870") : object "kube-system"/"coredns" not registered
	Aug 19 12:15:43 test-preload-967295 kubelet[1155]: E0819 12:15:43.381853    1155 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-8pr44" podUID=47659e16-00c4-4b79-bc76-5264959ff870
	Aug 19 12:15:43 test-preload-967295 kubelet[1155]: I0819 12:15:43.387500    1155 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=2355e358-8f44-471c-a7f1-7a03ffc7fce5 path="/var/lib/kubelet/pods/2355e358-8f44-471c-a7f1-7a03ffc7fce5/volumes"
	Aug 19 12:15:43 test-preload-967295 kubelet[1155]: I0819 12:15:43.427570    1155 scope.go:110] "RemoveContainer" containerID="6d6ac19259a09fe56645e28694681198b8cb998de3db1db1f94a819edef8bc87"
	Aug 19 12:15:43 test-preload-967295 kubelet[1155]: I0819 12:15:43.427938    1155 scope.go:110] "RemoveContainer" containerID="62dacd169b5e9ba655ec71d02d110f300c21965a59637d78086cc12f81a4ffa3"
	Aug 19 12:15:43 test-preload-967295 kubelet[1155]: E0819 12:15:43.429023    1155 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5f068e4e-85fb-402c-a6f7-8690a1bf26fe)\"" pod="kube-system/storage-provisioner" podUID=5f068e4e-85fb-402c-a6f7-8690a1bf26fe
	Aug 19 12:15:44 test-preload-967295 kubelet[1155]: I0819 12:15:44.431722    1155 scope.go:110] "RemoveContainer" containerID="6d6ac19259a09fe56645e28694681198b8cb998de3db1db1f94a819edef8bc87"
	Aug 19 12:15:44 test-preload-967295 kubelet[1155]: E0819 12:15:44.431914    1155 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5f068e4e-85fb-402c-a6f7-8690a1bf26fe)\"" pod="kube-system/storage-provisioner" podUID=5f068e4e-85fb-402c-a6f7-8690a1bf26fe
	Aug 19 12:15:44 test-preload-967295 kubelet[1155]: E0819 12:15:44.982338    1155 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 19 12:15:44 test-preload-967295 kubelet[1155]: E0819 12:15:44.982445    1155 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/47659e16-00c4-4b79-bc76-5264959ff870-config-volume podName:47659e16-00c4-4b79-bc76-5264959ff870 nodeName:}" failed. No retries permitted until 2024-08-19 12:15:48.982427674 +0000 UTC m=+13.825761715 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/47659e16-00c4-4b79-bc76-5264959ff870-config-volume") pod "coredns-6d4b75cb6d-8pr44" (UID: "47659e16-00c4-4b79-bc76-5264959ff870") : object "kube-system"/"coredns" not registered
	Aug 19 12:15:55 test-preload-967295 kubelet[1155]: I0819 12:15:55.382329    1155 scope.go:110] "RemoveContainer" containerID="6d6ac19259a09fe56645e28694681198b8cb998de3db1db1f94a819edef8bc87"
	
	
	==> storage-provisioner [533d5c3f875b08c3d2f5b951ed17a24846ddc83a9c462ec3a18f502c62c982b9] <==
	I0819 12:15:55.501192       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 12:15:55.531770       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 12:15:55.532328       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [6d6ac19259a09fe56645e28694681198b8cb998de3db1db1f94a819edef8bc87] <==
	I0819 12:15:42.537424       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0819 12:15:42.539753       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-967295 -n test-preload-967295
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-967295 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-967295" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-967295
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-967295: (1.120867881s)
--- FAIL: TestPreload (167.44s)

                                                
                                    
TestKubernetesUpgrade (421.15s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-814177 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-814177 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m57.396563528s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-814177] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19476
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19476-99410/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-814177" primary control-plane node in "kubernetes-upgrade-814177" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 12:21:18.270190  151102 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:21:18.270319  151102 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:21:18.270328  151102 out.go:358] Setting ErrFile to fd 2...
	I0819 12:21:18.270333  151102 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:21:18.270527  151102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 12:21:18.271106  151102 out.go:352] Setting JSON to false
	I0819 12:21:18.272097  151102 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7424,"bootTime":1724062654,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 12:21:18.272165  151102 start.go:139] virtualization: kvm guest
	I0819 12:21:18.274319  151102 out.go:177] * [kubernetes-upgrade-814177] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 12:21:18.275638  151102 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 12:21:18.275715  151102 notify.go:220] Checking for updates...
	I0819 12:21:18.278044  151102 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:21:18.279539  151102 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 12:21:18.281062  151102 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 12:21:18.282401  151102 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 12:21:18.283854  151102 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 12:21:18.285501  151102 config.go:182] Loaded profile config "NoKubernetes-340370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0819 12:21:18.285600  151102 config.go:182] Loaded profile config "cert-expiration-497658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:21:18.285687  151102 config.go:182] Loaded profile config "pause-732494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:21:18.285765  151102 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 12:21:18.324033  151102 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 12:21:18.325231  151102 start.go:297] selected driver: kvm2
	I0819 12:21:18.325253  151102 start.go:901] validating driver "kvm2" against <nil>
	I0819 12:21:18.325271  151102 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 12:21:18.326027  151102 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:21:18.326130  151102 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19476-99410/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 12:21:18.343343  151102 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 12:21:18.343416  151102 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 12:21:18.343641  151102 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 12:21:18.343713  151102 cni.go:84] Creating CNI manager for ""
	I0819 12:21:18.343769  151102 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 12:21:18.343784  151102 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 12:21:18.343860  151102 start.go:340] cluster config:
	{Name:kubernetes-upgrade-814177 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-814177 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:21:18.343972  151102 iso.go:125] acquiring lock: {Name:mkf3e460532d8bd3cc1352bbf698537d79a481a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:21:18.345784  151102 out.go:177] * Starting "kubernetes-upgrade-814177" primary control-plane node in "kubernetes-upgrade-814177" cluster
	I0819 12:21:18.346951  151102 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 12:21:18.346987  151102 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 12:21:18.347009  151102 cache.go:56] Caching tarball of preloaded images
	I0819 12:21:18.347090  151102 preload.go:172] Found /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 12:21:18.347102  151102 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0819 12:21:18.347211  151102 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/config.json ...
	I0819 12:21:18.347235  151102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/config.json: {Name:mka9fdada62122b790f58ab679687d18d33d0592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:21:18.347392  151102 start.go:360] acquireMachinesLock for kubernetes-upgrade-814177: {Name:mk79269fbbd6912302c8df8ddb039d4e2f1a0790 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 12:21:47.052513  151102 start.go:364] duration metric: took 28.705054754s to acquireMachinesLock for "kubernetes-upgrade-814177"
	I0819 12:21:47.052587  151102 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-814177 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-814177 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 12:21:47.052696  151102 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 12:21:47.055630  151102 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 12:21:47.055867  151102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:21:47.055901  151102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:21:47.076415  151102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46225
	I0819 12:21:47.076835  151102 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:21:47.077461  151102 main.go:141] libmachine: Using API Version  1
	I0819 12:21:47.077485  151102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:21:47.077914  151102 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:21:47.078146  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetMachineName
	I0819 12:21:47.078305  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .DriverName
	I0819 12:21:47.078505  151102 start.go:159] libmachine.API.Create for "kubernetes-upgrade-814177" (driver="kvm2")
	I0819 12:21:47.078533  151102 client.go:168] LocalClient.Create starting
	I0819 12:21:47.078574  151102 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem
	I0819 12:21:47.078625  151102 main.go:141] libmachine: Decoding PEM data...
	I0819 12:21:47.078650  151102 main.go:141] libmachine: Parsing certificate...
	I0819 12:21:47.078719  151102 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem
	I0819 12:21:47.078739  151102 main.go:141] libmachine: Decoding PEM data...
	I0819 12:21:47.078752  151102 main.go:141] libmachine: Parsing certificate...
	I0819 12:21:47.078769  151102 main.go:141] libmachine: Running pre-create checks...
	I0819 12:21:47.078783  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .PreCreateCheck
	I0819 12:21:47.079174  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetConfigRaw
	I0819 12:21:47.079616  151102 main.go:141] libmachine: Creating machine...
	I0819 12:21:47.079629  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .Create
	I0819 12:21:47.079791  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Creating KVM machine...
	I0819 12:21:47.081124  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found existing default KVM network
	I0819 12:21:47.082782  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | I0819 12:21:47.082602  151598 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:6c:b4:c0} reservation:<nil>}
	I0819 12:21:47.084097  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | I0819 12:21:47.084005  151598 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000284aa0}
	I0819 12:21:47.084122  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | created network xml: 
	I0819 12:21:47.084135  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | <network>
	I0819 12:21:47.084148  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG |   <name>mk-kubernetes-upgrade-814177</name>
	I0819 12:21:47.084172  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG |   <dns enable='no'/>
	I0819 12:21:47.084183  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG |   
	I0819 12:21:47.084198  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0819 12:21:47.084210  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG |     <dhcp>
	I0819 12:21:47.084245  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0819 12:21:47.084273  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG |     </dhcp>
	I0819 12:21:47.084286  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG |   </ip>
	I0819 12:21:47.084297  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG |   
	I0819 12:21:47.084308  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | </network>
	I0819 12:21:47.084323  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | 
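The network XML above is what minikube hands to libvirt just before the "trying to create private KVM network" step below. A minimal sketch, assuming virsh is installed and the XML has been saved locally as network.xml (both assumptions, not part of this log), of defining and checking an equivalent network by hand from Go:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Define the network from the XML shown in the log (assumed saved as network.xml),
	// start it, and print its status.
	for _, args := range [][]string{
		{"net-define", "network.xml"},
		{"net-start", "mk-kubernetes-upgrade-814177"},
		{"net-info", "mk-kubernetes-upgrade-814177"},
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("virsh %v failed: %v\n%s", args, err, out)
		}
		fmt.Printf("virsh %v:\n%s\n", args, out)
	}
}

minikube itself talks to libvirt through its Go bindings rather than shelling out to virsh; the sketch only shows the equivalent operations.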
	I0819 12:21:47.089994  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | trying to create private KVM network mk-kubernetes-upgrade-814177 192.168.50.0/24...
	I0819 12:21:47.168498  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | private KVM network mk-kubernetes-upgrade-814177 192.168.50.0/24 created
	I0819 12:21:47.168537  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | I0819 12:21:47.168432  151598 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 12:21:47.168552  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Setting up store path in /home/jenkins/minikube-integration/19476-99410/.minikube/machines/kubernetes-upgrade-814177 ...
	I0819 12:21:47.168571  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Building disk image from file:///home/jenkins/minikube-integration/19476-99410/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 12:21:47.168595  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Downloading /home/jenkins/minikube-integration/19476-99410/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19476-99410/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 12:21:47.420772  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | I0819 12:21:47.420643  151598 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/kubernetes-upgrade-814177/id_rsa...
	I0819 12:21:47.474875  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | I0819 12:21:47.474723  151598 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/kubernetes-upgrade-814177/kubernetes-upgrade-814177.rawdisk...
	I0819 12:21:47.474905  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | Writing magic tar header
	I0819 12:21:47.474918  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | Writing SSH key tar header
	I0819 12:21:47.474930  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | I0819 12:21:47.474842  151598 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19476-99410/.minikube/machines/kubernetes-upgrade-814177 ...
	I0819 12:21:47.474946  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/kubernetes-upgrade-814177
	I0819 12:21:47.475027  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube/machines
	I0819 12:21:47.475055  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 12:21:47.475087  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube/machines/kubernetes-upgrade-814177 (perms=drwx------)
	I0819 12:21:47.475104  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410
	I0819 12:21:47.475117  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 12:21:47.475135  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube/machines (perms=drwxr-xr-x)
	I0819 12:21:47.475148  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | Checking permissions on dir: /home/jenkins
	I0819 12:21:47.475168  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube (perms=drwxr-xr-x)
	I0819 12:21:47.475185  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | Checking permissions on dir: /home
	I0819 12:21:47.475199  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410 (perms=drwxrwxr-x)
	I0819 12:21:47.475216  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 12:21:47.475227  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 12:21:47.475236  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Creating domain...
	I0819 12:21:47.475247  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | Skipping /home - not owner
	I0819 12:21:47.476374  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) define libvirt domain using xml: 
	I0819 12:21:47.476410  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) <domain type='kvm'>
	I0819 12:21:47.476432  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)   <name>kubernetes-upgrade-814177</name>
	I0819 12:21:47.476451  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)   <memory unit='MiB'>2200</memory>
	I0819 12:21:47.476472  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)   <vcpu>2</vcpu>
	I0819 12:21:47.476479  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)   <features>
	I0819 12:21:47.476486  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)     <acpi/>
	I0819 12:21:47.476492  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)     <apic/>
	I0819 12:21:47.476499  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)     <pae/>
	I0819 12:21:47.476507  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)     
	I0819 12:21:47.476512  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)   </features>
	I0819 12:21:47.476520  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)   <cpu mode='host-passthrough'>
	I0819 12:21:47.476550  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)   
	I0819 12:21:47.476576  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)   </cpu>
	I0819 12:21:47.476586  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)   <os>
	I0819 12:21:47.476598  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)     <type>hvm</type>
	I0819 12:21:47.476612  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)     <boot dev='cdrom'/>
	I0819 12:21:47.476623  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)     <boot dev='hd'/>
	I0819 12:21:47.476636  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)     <bootmenu enable='no'/>
	I0819 12:21:47.476646  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)   </os>
	I0819 12:21:47.476664  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)   <devices>
	I0819 12:21:47.476681  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)     <disk type='file' device='cdrom'>
	I0819 12:21:47.476700  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)       <source file='/home/jenkins/minikube-integration/19476-99410/.minikube/machines/kubernetes-upgrade-814177/boot2docker.iso'/>
	I0819 12:21:47.476720  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)       <target dev='hdc' bus='scsi'/>
	I0819 12:21:47.476741  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)       <readonly/>
	I0819 12:21:47.476775  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)     </disk>
	I0819 12:21:47.476799  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)     <disk type='file' device='disk'>
	I0819 12:21:47.476813  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 12:21:47.476832  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)       <source file='/home/jenkins/minikube-integration/19476-99410/.minikube/machines/kubernetes-upgrade-814177/kubernetes-upgrade-814177.rawdisk'/>
	I0819 12:21:47.476850  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)       <target dev='hda' bus='virtio'/>
	I0819 12:21:47.476864  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)     </disk>
	I0819 12:21:47.476873  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)     <interface type='network'>
	I0819 12:21:47.476887  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)       <source network='mk-kubernetes-upgrade-814177'/>
	I0819 12:21:47.476898  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)       <model type='virtio'/>
	I0819 12:21:47.476907  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)     </interface>
	I0819 12:21:47.476922  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)     <interface type='network'>
	I0819 12:21:47.476935  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)       <source network='default'/>
	I0819 12:21:47.476946  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)       <model type='virtio'/>
	I0819 12:21:47.476956  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)     </interface>
	I0819 12:21:47.476964  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)     <serial type='pty'>
	I0819 12:21:47.476977  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)       <target port='0'/>
	I0819 12:21:47.476987  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)     </serial>
	I0819 12:21:47.477000  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)     <console type='pty'>
	I0819 12:21:47.477015  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)       <target type='serial' port='0'/>
	I0819 12:21:47.477030  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)     </console>
	I0819 12:21:47.477044  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)     <rng model='virtio'>
	I0819 12:21:47.477059  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)       <backend model='random'>/dev/random</backend>
	I0819 12:21:47.477071  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)     </rng>
	I0819 12:21:47.477082  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)     
	I0819 12:21:47.477092  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)     
	I0819 12:21:47.477101  151102 main.go:141] libmachine: (kubernetes-upgrade-814177)   </devices>
	I0819 12:21:47.477112  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) </domain>
	I0819 12:21:47.477124  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) 
	I0819 12:21:47.481612  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:7f:1f:18 in network default
	I0819 12:21:47.482229  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Ensuring networks are active...
	I0819 12:21:47.482256  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:21:47.482974  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Ensuring network default is active
	I0819 12:21:47.483321  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Ensuring network mk-kubernetes-upgrade-814177 is active
	I0819 12:21:47.483799  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Getting domain xml...
	I0819 12:21:47.484642  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Creating domain...
	I0819 12:21:48.824450  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Waiting to get IP...
	I0819 12:21:48.825449  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:21:48.825922  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | unable to find current IP address of domain kubernetes-upgrade-814177 in network mk-kubernetes-upgrade-814177
	I0819 12:21:48.826001  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | I0819 12:21:48.825924  151598 retry.go:31] will retry after 210.06367ms: waiting for machine to come up
	I0819 12:21:49.037484  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:21:49.038035  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | unable to find current IP address of domain kubernetes-upgrade-814177 in network mk-kubernetes-upgrade-814177
	I0819 12:21:49.038065  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | I0819 12:21:49.037998  151598 retry.go:31] will retry after 373.45489ms: waiting for machine to come up
	I0819 12:21:49.413792  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:21:49.414402  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | unable to find current IP address of domain kubernetes-upgrade-814177 in network mk-kubernetes-upgrade-814177
	I0819 12:21:49.414434  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | I0819 12:21:49.414334  151598 retry.go:31] will retry after 433.082903ms: waiting for machine to come up
	I0819 12:21:49.848956  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:21:49.849511  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | unable to find current IP address of domain kubernetes-upgrade-814177 in network mk-kubernetes-upgrade-814177
	I0819 12:21:49.849542  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | I0819 12:21:49.849450  151598 retry.go:31] will retry after 514.981071ms: waiting for machine to come up
	I0819 12:21:50.366110  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:21:50.366678  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | unable to find current IP address of domain kubernetes-upgrade-814177 in network mk-kubernetes-upgrade-814177
	I0819 12:21:50.366706  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | I0819 12:21:50.366629  151598 retry.go:31] will retry after 538.45341ms: waiting for machine to come up
	I0819 12:21:50.906727  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:21:50.907223  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | unable to find current IP address of domain kubernetes-upgrade-814177 in network mk-kubernetes-upgrade-814177
	I0819 12:21:50.907255  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | I0819 12:21:50.907151  151598 retry.go:31] will retry after 757.912242ms: waiting for machine to come up
	I0819 12:21:51.667299  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:21:51.667921  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | unable to find current IP address of domain kubernetes-upgrade-814177 in network mk-kubernetes-upgrade-814177
	I0819 12:21:51.667944  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | I0819 12:21:51.667851  151598 retry.go:31] will retry after 918.595384ms: waiting for machine to come up
	I0819 12:21:52.588403  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:21:52.588757  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | unable to find current IP address of domain kubernetes-upgrade-814177 in network mk-kubernetes-upgrade-814177
	I0819 12:21:52.588783  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | I0819 12:21:52.588728  151598 retry.go:31] will retry after 978.879249ms: waiting for machine to come up
	I0819 12:21:53.568752  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:21:53.569229  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | unable to find current IP address of domain kubernetes-upgrade-814177 in network mk-kubernetes-upgrade-814177
	I0819 12:21:53.569255  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | I0819 12:21:53.569183  151598 retry.go:31] will retry after 1.642170572s: waiting for machine to come up
	I0819 12:21:55.214292  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:21:55.214799  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | unable to find current IP address of domain kubernetes-upgrade-814177 in network mk-kubernetes-upgrade-814177
	I0819 12:21:55.214823  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | I0819 12:21:55.214754  151598 retry.go:31] will retry after 1.841096174s: waiting for machine to come up
	I0819 12:21:57.058981  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:21:57.059509  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | unable to find current IP address of domain kubernetes-upgrade-814177 in network mk-kubernetes-upgrade-814177
	I0819 12:21:57.059586  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | I0819 12:21:57.059466  151598 retry.go:31] will retry after 2.52126677s: waiting for machine to come up
	I0819 12:21:59.582225  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:21:59.582741  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | unable to find current IP address of domain kubernetes-upgrade-814177 in network mk-kubernetes-upgrade-814177
	I0819 12:21:59.582786  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | I0819 12:21:59.582698  151598 retry.go:31] will retry after 3.200783295s: waiting for machine to come up
	I0819 12:22:02.785100  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:02.785696  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | unable to find current IP address of domain kubernetes-upgrade-814177 in network mk-kubernetes-upgrade-814177
	I0819 12:22:02.785737  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | I0819 12:22:02.785666  151598 retry.go:31] will retry after 3.151827505s: waiting for machine to come up
	I0819 12:22:05.940739  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:05.941335  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | unable to find current IP address of domain kubernetes-upgrade-814177 in network mk-kubernetes-upgrade-814177
	I0819 12:22:05.941370  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | I0819 12:22:05.941272  151598 retry.go:31] will retry after 3.425049388s: waiting for machine to come up
	I0819 12:22:09.368916  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:09.369399  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has current primary IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:09.369430  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Found IP for machine: 192.168.50.23
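The dozen retries above are minikube polling for the guest's DHCP lease with a growing delay until the domain reports an address. A rough sketch of the same wait-for-lease pattern, assuming virsh is available and reusing the network name and MAC address from this log (illustrative only, not minikube's retry.go):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// waitForLease polls `virsh net-dhcp-leases` until a lease for the given MAC
// appears, mirroring the retry loop visible in the log above.
func waitForLease(network, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
		if err == nil {
			for _, line := range strings.Split(string(out), "\n") {
				if strings.Contains(line, mac) {
					return strings.TrimSpace(line), nil
				}
			}
		}
		time.Sleep(backoff)
		if backoff < 3*time.Second {
			backoff *= 2 // back off roughly like the retries in the log
		}
	}
	return "", fmt.Errorf("no DHCP lease for %s in network %s after %s", mac, network, timeout)
}

func main() {
	lease, err := waitForLease("mk-kubernetes-upgrade-814177", "52:54:00:0c:b8:db", 2*time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(lease)
}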
	I0819 12:22:09.369445  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Reserving static IP address...
	I0819 12:22:09.369859  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-814177", mac: "52:54:00:0c:b8:db", ip: "192.168.50.23"} in network mk-kubernetes-upgrade-814177
	I0819 12:22:09.459479  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | Getting to WaitForSSH function...
	I0819 12:22:09.459514  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Reserved static IP address: 192.168.50.23
	I0819 12:22:09.459530  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Waiting for SSH to be available...
	I0819 12:22:09.462561  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:09.463044  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:22:01 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0c:b8:db}
	I0819 12:22:09.463079  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:09.463286  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | Using SSH client type: external
	I0819 12:22:09.463320  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | Using SSH private key: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/kubernetes-upgrade-814177/id_rsa (-rw-------)
	I0819 12:22:09.463355  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.23 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19476-99410/.minikube/machines/kubernetes-upgrade-814177/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 12:22:09.463371  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | About to run SSH command:
	I0819 12:22:09.463389  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | exit 0
	I0819 12:22:09.592483  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | SSH cmd err, output: <nil>: 
	I0819 12:22:09.592756  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) KVM machine creation complete!
	I0819 12:22:09.593129  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetConfigRaw
	I0819 12:22:09.593871  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .DriverName
	I0819 12:22:09.594116  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .DriverName
	I0819 12:22:09.594324  151102 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 12:22:09.594344  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetState
	I0819 12:22:09.596068  151102 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 12:22:09.596082  151102 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 12:22:09.596087  151102 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 12:22:09.596095  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHHostname
	I0819 12:22:09.599118  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:09.599593  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:22:01 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:22:09.599640  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:09.599798  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHPort
	I0819 12:22:09.600049  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:22:09.600252  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:22:09.600428  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHUsername
	I0819 12:22:09.600636  151102 main.go:141] libmachine: Using SSH client type: native
	I0819 12:22:09.600899  151102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0819 12:22:09.600913  151102 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 12:22:09.711221  151102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
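The probe above runs "exit 0" over SSH until the guest accepts a connection. A minimal standalone sketch of the same check using golang.org/x/crypto/ssh, with the address, user, and key path copied from the log; this is an illustration of the step, not minikube's own WaitForSSH code. The host key is deliberately not verified because the VM and key are throwaway:

package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19476-99410/.minikube/machines/kubernetes-upgrade-814177/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM, throwaway key
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.50.23:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	// The same "exit 0" probe the log runs to confirm the guest accepts SSH.
	if err := sess.Run("exit 0"); err != nil {
		log.Fatal(err)
	}
	log.Println("SSH is available")
}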
	I0819 12:22:09.711250  151102 main.go:141] libmachine: Detecting the provisioner...
	I0819 12:22:09.711261  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHHostname
	I0819 12:22:09.714726  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:09.715141  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:22:01 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:22:09.715189  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:09.715383  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHPort
	I0819 12:22:09.715615  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:22:09.715825  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:22:09.715977  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHUsername
	I0819 12:22:09.716171  151102 main.go:141] libmachine: Using SSH client type: native
	I0819 12:22:09.716387  151102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0819 12:22:09.716399  151102 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 12:22:09.816362  151102 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 12:22:09.816443  151102 main.go:141] libmachine: found compatible host: buildroot
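Provisioner detection above is driven by the ID field of the /etc/os-release output fetched over SSH, which here matched "buildroot". A small local sketch of the same parsing step (reading the local file instead of the remote one, purely for illustration):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// osReleaseID extracts the ID= field from an os-release file, which is the
// value the provisioner match in the log is based on.
func osReleaseID(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
		}
	}
	return "", fmt.Errorf("no ID= field in %s", path)
}

func main() {
	id, err := osReleaseID("/etc/os-release")
	if err != nil {
		panic(err)
	}
	fmt.Println("detected id:", id)
}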
	I0819 12:22:09.816459  151102 main.go:141] libmachine: Provisioning with buildroot...
	I0819 12:22:09.816470  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetMachineName
	I0819 12:22:09.816749  151102 buildroot.go:166] provisioning hostname "kubernetes-upgrade-814177"
	I0819 12:22:09.816783  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetMachineName
	I0819 12:22:09.817024  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHHostname
	I0819 12:22:09.820096  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:09.820549  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:22:01 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:22:09.820580  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:09.820740  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHPort
	I0819 12:22:09.820940  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:22:09.821123  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:22:09.821263  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHUsername
	I0819 12:22:09.821442  151102 main.go:141] libmachine: Using SSH client type: native
	I0819 12:22:09.821639  151102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0819 12:22:09.821654  151102 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-814177 && echo "kubernetes-upgrade-814177" | sudo tee /etc/hostname
	I0819 12:22:09.939220  151102 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-814177
	
	I0819 12:22:09.939247  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHHostname
	I0819 12:22:09.942321  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:09.942693  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:22:01 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:22:09.942728  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:09.943055  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHPort
	I0819 12:22:09.943242  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:22:09.943430  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:22:09.943569  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHUsername
	I0819 12:22:09.943789  151102 main.go:141] libmachine: Using SSH client type: native
	I0819 12:22:09.944004  151102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0819 12:22:09.944034  151102 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-814177' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-814177/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-814177' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 12:22:10.057307  151102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:22:10.057337  151102 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19476-99410/.minikube CaCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19476-99410/.minikube}
	I0819 12:22:10.057385  151102 buildroot.go:174] setting up certificates
	I0819 12:22:10.057404  151102 provision.go:84] configureAuth start
	I0819 12:22:10.057427  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetMachineName
	I0819 12:22:10.057774  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetIP
	I0819 12:22:10.060477  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:10.060851  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:22:01 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:22:10.060880  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:10.061097  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHHostname
	I0819 12:22:10.063009  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:10.063299  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:22:01 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:22:10.063329  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:10.063484  151102 provision.go:143] copyHostCerts
	I0819 12:22:10.063545  151102 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem, removing ...
	I0819 12:22:10.063569  151102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 12:22:10.063626  151102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem (1082 bytes)
	I0819 12:22:10.063765  151102 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem, removing ...
	I0819 12:22:10.063777  151102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 12:22:10.063809  151102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem (1123 bytes)
	I0819 12:22:10.063883  151102 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem, removing ...
	I0819 12:22:10.063893  151102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 12:22:10.063920  151102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem (1679 bytes)
	I0819 12:22:10.063984  151102 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-814177 san=[127.0.0.1 192.168.50.23 kubernetes-upgrade-814177 localhost minikube]
	I0819 12:22:10.392485  151102 provision.go:177] copyRemoteCerts
	I0819 12:22:10.392550  151102 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 12:22:10.392581  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHHostname
	I0819 12:22:10.395292  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:10.395659  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:22:01 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:22:10.395691  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:10.395895  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHPort
	I0819 12:22:10.396111  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:22:10.396301  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHUsername
	I0819 12:22:10.396463  151102 sshutil.go:53] new ssh client: &{IP:192.168.50.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/kubernetes-upgrade-814177/id_rsa Username:docker}
	I0819 12:22:10.478264  151102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 12:22:10.502862  151102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 12:22:10.529363  151102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0819 12:22:10.555598  151102 provision.go:87] duration metric: took 498.176712ms to configureAuth
	I0819 12:22:10.555629  151102 buildroot.go:189] setting minikube options for container-runtime
	I0819 12:22:10.555835  151102 config.go:182] Loaded profile config "kubernetes-upgrade-814177": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 12:22:10.555926  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHHostname
	I0819 12:22:10.558606  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:10.559055  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:22:01 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:22:10.559089  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:10.559299  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHPort
	I0819 12:22:10.559543  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:22:10.559751  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:22:10.559904  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHUsername
	I0819 12:22:10.560082  151102 main.go:141] libmachine: Using SSH client type: native
	I0819 12:22:10.560248  151102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0819 12:22:10.560262  151102 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 12:22:10.818617  151102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 12:22:10.818674  151102 main.go:141] libmachine: Checking connection to Docker...
	I0819 12:22:10.818689  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetURL
	I0819 12:22:10.820141  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | Using libvirt version 6000000
	I0819 12:22:10.822472  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:10.822767  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:22:01 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:22:10.822793  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:10.822996  151102 main.go:141] libmachine: Docker is up and running!
	I0819 12:22:10.823015  151102 main.go:141] libmachine: Reticulating splines...
	I0819 12:22:10.823022  151102 client.go:171] duration metric: took 23.744478284s to LocalClient.Create
	I0819 12:22:10.823048  151102 start.go:167] duration metric: took 23.744542242s to libmachine.API.Create "kubernetes-upgrade-814177"
	I0819 12:22:10.823058  151102 start.go:293] postStartSetup for "kubernetes-upgrade-814177" (driver="kvm2")
	I0819 12:22:10.823069  151102 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 12:22:10.823086  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .DriverName
	I0819 12:22:10.823377  151102 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 12:22:10.823413  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHHostname
	I0819 12:22:10.825682  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:10.826061  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:22:01 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:22:10.826097  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:10.826209  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHPort
	I0819 12:22:10.826408  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:22:10.826581  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHUsername
	I0819 12:22:10.826754  151102 sshutil.go:53] new ssh client: &{IP:192.168.50.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/kubernetes-upgrade-814177/id_rsa Username:docker}
	I0819 12:22:10.906381  151102 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 12:22:10.910880  151102 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 12:22:10.910916  151102 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/addons for local assets ...
	I0819 12:22:10.910995  151102 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/files for local assets ...
	I0819 12:22:10.911077  151102 filesync.go:149] local asset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> 1066322.pem in /etc/ssl/certs
	I0819 12:22:10.911169  151102 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 12:22:10.920234  151102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 12:22:10.945026  151102 start.go:296] duration metric: took 121.953116ms for postStartSetup
	I0819 12:22:10.945081  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetConfigRaw
	I0819 12:22:10.945701  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetIP
	I0819 12:22:10.948670  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:10.949036  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:22:01 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:22:10.949064  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:10.949284  151102 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/config.json ...
	I0819 12:22:10.949513  151102 start.go:128] duration metric: took 23.896805241s to createHost
	I0819 12:22:10.949547  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHHostname
	I0819 12:22:10.951982  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:10.952380  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:22:01 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:22:10.952410  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:10.952551  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHPort
	I0819 12:22:10.952749  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:22:10.952955  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:22:10.953110  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHUsername
	I0819 12:22:10.953292  151102 main.go:141] libmachine: Using SSH client type: native
	I0819 12:22:10.953517  151102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0819 12:22:10.953532  151102 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 12:22:11.056252  151102 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724070131.031426095
	
	I0819 12:22:11.056279  151102 fix.go:216] guest clock: 1724070131.031426095
	I0819 12:22:11.056291  151102 fix.go:229] Guest: 2024-08-19 12:22:11.031426095 +0000 UTC Remote: 2024-08-19 12:22:10.949526663 +0000 UTC m=+52.715067922 (delta=81.899432ms)
	I0819 12:22:11.056317  151102 fix.go:200] guest clock delta is within tolerance: 81.899432ms
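The clock check above compares the guest's "date +%s.%N" output against the host time recorded when the command was issued. A short sketch reproducing that delta calculation with the two timestamps from this log; the 2-second tolerance is an assumption for illustration, not necessarily the value minikube uses:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far the
// guest clock is from the given host reference time.
func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Host reference and guest output taken from the log lines above.
	host := time.Date(2024, 8, 19, 12, 22, 10, 949526663, time.UTC)
	delta, err := clockDelta("1724070131.031426095", host)
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock delta: %v (within assumed 2s tolerance: %v)\n",
		delta, math.Abs(float64(delta)) < float64(2*time.Second))
}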
	I0819 12:22:11.056322  151102 start.go:83] releasing machines lock for "kubernetes-upgrade-814177", held for 24.00375765s
	I0819 12:22:11.056346  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .DriverName
	I0819 12:22:11.056648  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetIP
	I0819 12:22:11.059194  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:11.059566  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:22:01 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:22:11.059599  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:11.059767  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .DriverName
	I0819 12:22:11.060441  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .DriverName
	I0819 12:22:11.060655  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .DriverName
	I0819 12:22:11.060751  151102 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 12:22:11.060801  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHHostname
	I0819 12:22:11.060919  151102 ssh_runner.go:195] Run: cat /version.json
	I0819 12:22:11.060941  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHHostname
	I0819 12:22:11.063508  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:11.063906  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:22:01 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:22:11.063938  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:11.063966  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:11.064153  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHPort
	I0819 12:22:11.064358  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:22:11.064495  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:22:01 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:22:11.064521  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:11.064560  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHUsername
	I0819 12:22:11.064728  151102 sshutil.go:53] new ssh client: &{IP:192.168.50.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/kubernetes-upgrade-814177/id_rsa Username:docker}
	I0819 12:22:11.065195  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHPort
	I0819 12:22:11.065351  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:22:11.065510  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHUsername
	I0819 12:22:11.065646  151102 sshutil.go:53] new ssh client: &{IP:192.168.50.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/kubernetes-upgrade-814177/id_rsa Username:docker}
	I0819 12:22:11.166774  151102 ssh_runner.go:195] Run: systemctl --version
	I0819 12:22:11.172595  151102 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 12:22:11.329533  151102 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 12:22:11.335019  151102 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 12:22:11.335083  151102 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 12:22:11.351833  151102 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 12:22:11.351862  151102 start.go:495] detecting cgroup driver to use...
	I0819 12:22:11.351938  151102 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 12:22:11.370868  151102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 12:22:11.386591  151102 docker.go:217] disabling cri-docker service (if available) ...
	I0819 12:22:11.386663  151102 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 12:22:11.402345  151102 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 12:22:11.421284  151102 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 12:22:11.545178  151102 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 12:22:11.708968  151102 docker.go:233] disabling docker service ...
	I0819 12:22:11.709065  151102 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 12:22:11.723316  151102 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 12:22:11.739074  151102 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 12:22:11.872518  151102 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 12:22:12.001676  151102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 12:22:12.016429  151102 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 12:22:12.036166  151102 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 12:22:12.036245  151102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:22:12.047195  151102 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 12:22:12.047274  151102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:22:12.057999  151102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:22:12.071086  151102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
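	[editor's note] The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the pinned pause image and the cgroupfs cgroup manager. Below is a hedged Go sketch of the same key=value rewrite applied to a local copy of the drop-in file; the helper name and path are assumptions for illustration, and minikube itself performs this with sed over SSH rather than with a helper like this.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// setConfigValue rewrites any "key = ..." line in a CRI-O drop-in config so it
// becomes `key = "value"`, mirroring the sed commands in the log above.
func setConfigValue(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var out []string
	sc := bufio.NewScanner(strings.NewReader(string(data)))
	for sc.Scan() {
		line := sc.Text()
		if strings.Contains(line, key+" = ") {
			line = fmt.Sprintf("%s = %q", key, value)
		}
		out = append(out, line)
	}
	if err := sc.Err(); err != nil {
		return err
	}
	return os.WriteFile(path, []byte(strings.Join(out, "\n")+"\n"), 0o644)
}

func main() {
	// Operates on a local copy of the drop-in config (path is hypothetical).
	if err := setConfigValue("02-crio.conf", "pause_image", "registry.k8s.io/pause:3.2"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := setConfigValue("02-crio.conf", "cgroup_manager", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```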
	I0819 12:22:12.081936  151102 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 12:22:12.093310  151102 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 12:22:12.104714  151102 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 12:22:12.104785  151102 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 12:22:12.119680  151102 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
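	[editor's note] When the `sysctl net.bridge.bridge-nf-call-iptables` probe fails (exit 255 above, because the bridge module is not yet loaded), the run falls back to `modprobe br_netfilter` and then enables IPv4 forwarding. A small Go sketch of that fallback, using only the commands visible in the log; it assumes it runs as root on the guest.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Probe the bridge netfilter sysctl; if it is missing, load br_netfilter first.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge sysctl unavailable, loading br_netfilter:", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe br_netfilter failed:", err)
			os.Exit(1)
		}
	}
	// Enable IPv4 forwarding, equivalent to `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "enabling ip_forward failed:", err)
		os.Exit(1)
	}
}
```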
	I0819 12:22:12.130907  151102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:22:12.262124  151102 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 12:22:12.429212  151102 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 12:22:12.429291  151102 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 12:22:12.434410  151102 start.go:563] Will wait 60s for crictl version
	I0819 12:22:12.434488  151102 ssh_runner.go:195] Run: which crictl
	I0819 12:22:12.438816  151102 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 12:22:12.479499  151102 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 12:22:12.479593  151102 ssh_runner.go:195] Run: crio --version
	I0819 12:22:12.507398  151102 ssh_runner.go:195] Run: crio --version
	I0819 12:22:12.537563  151102 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 12:22:12.538805  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetIP
	I0819 12:22:12.542077  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:12.542466  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:22:01 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:22:12.542505  151102 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:22:12.542727  151102 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 12:22:12.547866  151102 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
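	[editor's note] The bash one-liner above drops any existing host.minikube.internal entry from /etc/hosts and appends a fresh "192.168.50.1	host.minikube.internal" line. An equivalent sketch in Go; it writes to a scratch copy of the hosts file so it does not need root, and the function name is illustrative.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any line ending in "\t<name>" and appends
// "<ip>\t<name>", mirroring the grep -v / echo pipeline in the log.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Operate on a scratch copy rather than the live /etc/hosts.
	if err := ensureHostsEntry("hosts.copy", "192.168.50.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```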
	I0819 12:22:12.560343  151102 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-814177 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-814177 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.23 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 12:22:12.560462  151102 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 12:22:12.560508  151102 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:22:12.593646  151102 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 12:22:12.593735  151102 ssh_runner.go:195] Run: which lz4
	I0819 12:22:12.598263  151102 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 12:22:12.603469  151102 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 12:22:12.603513  151102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 12:22:14.125723  151102 crio.go:462] duration metric: took 1.527503957s to copy over tarball
	I0819 12:22:14.125819  151102 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 12:22:16.812816  151102 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.686947099s)
	I0819 12:22:16.812850  151102 crio.go:469] duration metric: took 2.687090605s to extract the tarball
	I0819 12:22:16.812860  151102 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 12:22:16.856667  151102 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:22:16.900001  151102 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 12:22:16.900029  151102 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 12:22:16.900100  151102 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 12:22:16.900111  151102 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 12:22:16.900126  151102 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 12:22:16.900140  151102 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 12:22:16.900164  151102 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 12:22:16.900182  151102 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 12:22:16.900183  151102 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 12:22:16.900198  151102 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 12:22:16.901954  151102 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 12:22:16.901966  151102 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 12:22:16.901982  151102 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 12:22:16.901989  151102 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 12:22:16.902007  151102 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 12:22:16.902038  151102 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 12:22:16.901960  151102 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 12:22:16.901964  151102 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 12:22:17.054157  151102 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 12:22:17.056118  151102 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 12:22:17.062798  151102 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 12:22:17.069931  151102 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 12:22:17.081127  151102 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 12:22:17.092449  151102 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 12:22:17.094201  151102 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 12:22:17.119521  151102 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 12:22:17.119570  151102 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 12:22:17.119618  151102 ssh_runner.go:195] Run: which crictl
	I0819 12:22:17.142604  151102 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 12:22:17.142654  151102 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 12:22:17.142708  151102 ssh_runner.go:195] Run: which crictl
	I0819 12:22:17.205284  151102 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 12:22:17.205330  151102 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 12:22:17.205343  151102 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 12:22:17.205379  151102 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 12:22:17.205401  151102 ssh_runner.go:195] Run: which crictl
	I0819 12:22:17.205426  151102 ssh_runner.go:195] Run: which crictl
	I0819 12:22:17.234085  151102 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 12:22:17.234127  151102 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 12:22:17.234163  151102 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 12:22:17.234197  151102 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 12:22:17.234166  151102 ssh_runner.go:195] Run: which crictl
	I0819 12:22:17.234261  151102 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 12:22:17.234199  151102 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 12:22:17.234293  151102 ssh_runner.go:195] Run: which crictl
	I0819 12:22:17.234082  151102 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 12:22:17.234323  151102 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 12:22:17.234344  151102 ssh_runner.go:195] Run: which crictl
	I0819 12:22:17.234355  151102 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 12:22:17.234359  151102 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 12:22:17.258074  151102 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 12:22:17.258091  151102 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 12:22:17.258114  151102 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 12:22:17.383661  151102 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 12:22:17.383677  151102 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 12:22:17.383752  151102 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 12:22:17.383770  151102 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 12:22:17.385770  151102 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 12:22:17.402161  151102 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 12:22:17.418891  151102 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 12:22:17.539362  151102 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 12:22:17.539386  151102 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 12:22:17.539449  151102 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 12:22:17.539502  151102 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 12:22:17.539534  151102 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 12:22:17.545698  151102 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 12:22:17.569282  151102 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 12:22:17.569383  151102 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 12:22:17.717663  151102 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 12:22:17.717752  151102 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 12:22:17.717805  151102 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 12:22:17.717838  151102 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 12:22:17.717875  151102 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 12:22:17.766655  151102 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 12:22:17.766720  151102 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 12:22:17.766797  151102 cache_images.go:92] duration metric: took 866.754487ms to LoadCachedImages
	W0819 12:22:17.766895  151102 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19476-99410/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19476-99410/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0819 12:22:17.766914  151102 kubeadm.go:934] updating node { 192.168.50.23 8443 v1.20.0 crio true true} ...
	I0819 12:22:17.767044  151102 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-814177 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-814177 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 12:22:17.767120  151102 ssh_runner.go:195] Run: crio config
	I0819 12:22:17.822363  151102 cni.go:84] Creating CNI manager for ""
	I0819 12:22:17.822389  151102 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 12:22:17.822402  151102 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 12:22:17.822427  151102 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.23 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-814177 NodeName:kubernetes-upgrade-814177 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 12:22:17.822629  151102 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.23
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-814177"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.23
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.23"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
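	[editor's note] The kubeadm config dumped above is rendered from the cluster settings (advertise address, node name, pod CIDR, cgroup driver, Kubernetes version). A minimal Go sketch of rendering a fragment of such a config with text/template; the template text, struct, and field names are illustrative assumptions and not minikube's actual templates.

```go
package main

import (
	"os"
	"text/template"
)

// clusterParams holds the handful of values that vary per cluster in the
// generated kubeadm config. Field names are illustrative.
type clusterParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	KubernetesVersion string
}

const initConfig = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initConfig))
	p := clusterParams{
		AdvertiseAddress:  "192.168.50.23",
		BindPort:          8443,
		NodeName:          "kubernetes-upgrade-814177",
		PodSubnet:         "10.244.0.0/16",
		KubernetesVersion: "v1.20.0",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
```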
	
	I0819 12:22:17.822708  151102 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 12:22:17.833226  151102 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 12:22:17.833314  151102 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 12:22:17.843540  151102 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0819 12:22:17.860970  151102 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 12:22:17.877365  151102 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0819 12:22:17.895228  151102 ssh_runner.go:195] Run: grep 192.168.50.23	control-plane.minikube.internal$ /etc/hosts
	I0819 12:22:17.899210  151102 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.23	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 12:22:17.912278  151102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:22:18.049495  151102 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:22:18.067416  151102 certs.go:68] Setting up /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177 for IP: 192.168.50.23
	I0819 12:22:18.067449  151102 certs.go:194] generating shared ca certs ...
	I0819 12:22:18.067471  151102 certs.go:226] acquiring lock for ca certs: {Name:mkea0a5571fe2f86ea35780a8a0585cdcb5f186c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:22:18.067652  151102 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key
	I0819 12:22:18.067712  151102 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key
	I0819 12:22:18.067744  151102 certs.go:256] generating profile certs ...
	I0819 12:22:18.067839  151102 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/client.key
	I0819 12:22:18.067859  151102 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/client.crt with IP's: []
	I0819 12:22:18.211751  151102 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/client.crt ...
	I0819 12:22:18.211790  151102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/client.crt: {Name:mk5ec40b64837da84a959b8479ca283c8ed635d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:22:18.212012  151102 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/client.key ...
	I0819 12:22:18.212036  151102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/client.key: {Name:mk921dd00a7f52372269a47bc40d4389e921b6ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:22:18.212127  151102 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/apiserver.key.1b2c2bf2
	I0819 12:22:18.212148  151102 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/apiserver.crt.1b2c2bf2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.23]
	I0819 12:22:18.463406  151102 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/apiserver.crt.1b2c2bf2 ...
	I0819 12:22:18.463440  151102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/apiserver.crt.1b2c2bf2: {Name:mkf5dcd313740f2f70c6182aba053e2e50978db0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:22:18.463626  151102 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/apiserver.key.1b2c2bf2 ...
	I0819 12:22:18.463645  151102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/apiserver.key.1b2c2bf2: {Name:mk3fa7c57270430dcde2e143c20e672b87576456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:22:18.463772  151102 certs.go:381] copying /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/apiserver.crt.1b2c2bf2 -> /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/apiserver.crt
	I0819 12:22:18.463872  151102 certs.go:385] copying /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/apiserver.key.1b2c2bf2 -> /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/apiserver.key
	I0819 12:22:18.463950  151102 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/proxy-client.key
	I0819 12:22:18.463972  151102 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/proxy-client.crt with IP's: []
	I0819 12:22:18.602959  151102 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/proxy-client.crt ...
	I0819 12:22:18.602993  151102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/proxy-client.crt: {Name:mka08be60351a0cc7792e3e56a7e17b8b72f2008 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:22:18.603171  151102 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/proxy-client.key ...
	I0819 12:22:18.603187  151102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/proxy-client.key: {Name:mk4fa1d4d7727bb69c870ee2223ae610dae1550c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:22:18.603384  151102 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem (1338 bytes)
	W0819 12:22:18.603438  151102 certs.go:480] ignoring /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632_empty.pem, impossibly tiny 0 bytes
	I0819 12:22:18.603454  151102 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 12:22:18.603492  151102 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem (1082 bytes)
	I0819 12:22:18.603525  151102 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem (1123 bytes)
	I0819 12:22:18.603556  151102 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem (1679 bytes)
	I0819 12:22:18.603612  151102 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 12:22:18.604307  151102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 12:22:18.630307  151102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 12:22:18.655453  151102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 12:22:18.680058  151102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 12:22:18.709000  151102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 12:22:18.733445  151102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 12:22:18.757719  151102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 12:22:18.782196  151102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 12:22:18.805728  151102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem --> /usr/share/ca-certificates/106632.pem (1338 bytes)
	I0819 12:22:18.829441  151102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /usr/share/ca-certificates/1066322.pem (1708 bytes)
	I0819 12:22:18.854441  151102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 12:22:18.888330  151102 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 12:22:18.908268  151102 ssh_runner.go:195] Run: openssl version
	I0819 12:22:18.916616  151102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 12:22:18.931923  151102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:22:18.940872  151102 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:22:18.940958  151102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:22:18.947103  151102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 12:22:18.962400  151102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106632.pem && ln -fs /usr/share/ca-certificates/106632.pem /etc/ssl/certs/106632.pem"
	I0819 12:22:18.980688  151102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106632.pem
	I0819 12:22:18.986346  151102 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 11:26 /usr/share/ca-certificates/106632.pem
	I0819 12:22:18.986425  151102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106632.pem
	I0819 12:22:18.999214  151102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106632.pem /etc/ssl/certs/51391683.0"
	I0819 12:22:19.011317  151102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1066322.pem && ln -fs /usr/share/ca-certificates/1066322.pem /etc/ssl/certs/1066322.pem"
	I0819 12:22:19.024060  151102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1066322.pem
	I0819 12:22:19.028845  151102 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 11:26 /usr/share/ca-certificates/1066322.pem
	I0819 12:22:19.028911  151102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1066322.pem
	I0819 12:22:19.034887  151102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1066322.pem /etc/ssl/certs/3ec20f2e.0"
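	[editor's note] The openssl/ln pairs above install each CA certificate under /etc/ssl/certs and create the "<subject-hash>.0" symlink (e.g. b5213941.0 for minikubeCA.pem) that OpenSSL-based clients use for lookup. A hedged Go sketch of that step, shelling out to the same `openssl x509 -hash -noout -in` command seen in the log; the helper name and paths are illustrative and it assumes write access to the certs directory.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of a CA certificate and
// creates the "<hash>.0" symlink in certsDir, as the openssl/ln pair above does.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```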
	I0819 12:22:19.046265  151102 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:22:19.050380  151102 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 12:22:19.050439  151102 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-814177 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-814177 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.23 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:22:19.050541  151102 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 12:22:19.050604  151102 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 12:22:19.088405  151102 cri.go:89] found id: ""
	I0819 12:22:19.088478  151102 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 12:22:19.098437  151102 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 12:22:19.110123  151102 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 12:22:19.119991  151102 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 12:22:19.120015  151102 kubeadm.go:157] found existing configuration files:
	
	I0819 12:22:19.120081  151102 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 12:22:19.129775  151102 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 12:22:19.129848  151102 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 12:22:19.139637  151102 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 12:22:19.149067  151102 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 12:22:19.149135  151102 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 12:22:19.158766  151102 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 12:22:19.168147  151102 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 12:22:19.168220  151102 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 12:22:19.177892  151102 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 12:22:19.187487  151102 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 12:22:19.187562  151102 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 12:22:19.198758  151102 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 12:22:19.316167  151102 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 12:22:19.316253  151102 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 12:22:19.470031  151102 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 12:22:19.470215  151102 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 12:22:19.470360  151102 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 12:22:19.654514  151102 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 12:22:19.844012  151102 out.go:235]   - Generating certificates and keys ...
	I0819 12:22:19.844183  151102 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 12:22:19.844273  151102 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 12:22:19.844408  151102 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 12:22:20.186670  151102 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 12:22:20.277914  151102 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 12:22:20.549276  151102 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 12:22:20.822433  151102 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 12:22:20.822677  151102 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-814177 localhost] and IPs [192.168.50.23 127.0.0.1 ::1]
	I0819 12:22:20.925215  151102 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 12:22:20.925477  151102 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-814177 localhost] and IPs [192.168.50.23 127.0.0.1 ::1]
	I0819 12:22:21.100101  151102 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 12:22:21.659773  151102 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 12:22:21.878192  151102 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 12:22:21.878367  151102 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 12:22:22.012490  151102 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 12:22:22.267861  151102 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 12:22:22.331291  151102 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 12:22:22.499633  151102 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 12:22:22.517735  151102 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 12:22:22.519232  151102 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 12:22:22.519324  151102 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 12:22:22.661533  151102 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 12:22:22.663380  151102 out.go:235]   - Booting up control plane ...
	I0819 12:22:22.663509  151102 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 12:22:22.671596  151102 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 12:22:22.672674  151102 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 12:22:22.673703  151102 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 12:22:22.678402  151102 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 12:23:02.672356  151102 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 12:23:02.672823  151102 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 12:23:02.673093  151102 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 12:23:07.673639  151102 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 12:23:07.673941  151102 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 12:23:17.675364  151102 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 12:23:17.675644  151102 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 12:23:37.673033  151102 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 12:23:37.673285  151102 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 12:24:17.674573  151102 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 12:24:17.674815  151102 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 12:24:17.674824  151102 kubeadm.go:310] 
	I0819 12:24:17.674881  151102 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 12:24:17.674938  151102 kubeadm.go:310] 		timed out waiting for the condition
	I0819 12:24:17.674949  151102 kubeadm.go:310] 
	I0819 12:24:17.675000  151102 kubeadm.go:310] 	This error is likely caused by:
	I0819 12:24:17.675049  151102 kubeadm.go:310] 		- The kubelet is not running
	I0819 12:24:17.675229  151102 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 12:24:17.675259  151102 kubeadm.go:310] 
	I0819 12:24:17.675430  151102 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 12:24:17.675495  151102 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 12:24:17.675560  151102 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 12:24:17.675576  151102 kubeadm.go:310] 
	I0819 12:24:17.675737  151102 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 12:24:17.675845  151102 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 12:24:17.675856  151102 kubeadm.go:310] 
	I0819 12:24:17.675986  151102 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 12:24:17.676105  151102 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 12:24:17.676204  151102 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 12:24:17.676301  151102 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 12:24:17.676312  151102 kubeadm.go:310] 
	I0819 12:24:17.676636  151102 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 12:24:17.676745  151102 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 12:24:17.676837  151102 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0819 12:24:17.677029  151102 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-814177 localhost] and IPs [192.168.50.23 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-814177 localhost] and IPs [192.168.50.23 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-814177 localhost] and IPs [192.168.50.23 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-814177 localhost] and IPs [192.168.50.23 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0819 12:24:17.677096  151102 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 12:24:18.726289  151102 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.049152343s)
	I0819 12:24:18.726417  151102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:24:18.742179  151102 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 12:24:18.753696  151102 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 12:24:18.753725  151102 kubeadm.go:157] found existing configuration files:
	
	I0819 12:24:18.753792  151102 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 12:24:18.763265  151102 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 12:24:18.763327  151102 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 12:24:18.773226  151102 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 12:24:18.782781  151102 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 12:24:18.782866  151102 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 12:24:18.794444  151102 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 12:24:18.807108  151102 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 12:24:18.807170  151102 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 12:24:18.820215  151102 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 12:24:18.830930  151102 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 12:24:18.831009  151102 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 12:24:18.841085  151102 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 12:24:19.068976  151102 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 12:26:14.962751  151102 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 12:26:14.962871  151102 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 12:26:14.964819  151102 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 12:26:14.964899  151102 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 12:26:14.964997  151102 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 12:26:14.965121  151102 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 12:26:14.965248  151102 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 12:26:14.965340  151102 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 12:26:14.967811  151102 out.go:235]   - Generating certificates and keys ...
	I0819 12:26:14.967925  151102 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 12:26:14.968053  151102 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 12:26:14.968179  151102 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 12:26:14.968270  151102 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 12:26:14.968389  151102 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 12:26:14.968469  151102 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 12:26:14.968548  151102 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 12:26:14.968655  151102 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 12:26:14.968788  151102 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 12:26:14.968903  151102 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 12:26:14.968962  151102 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 12:26:14.969041  151102 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 12:26:14.969112  151102 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 12:26:14.969188  151102 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 12:26:14.969277  151102 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 12:26:14.969359  151102 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 12:26:14.969504  151102 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 12:26:14.969639  151102 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 12:26:14.969704  151102 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 12:26:14.969796  151102 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 12:26:14.971781  151102 out.go:235]   - Booting up control plane ...
	I0819 12:26:14.971914  151102 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 12:26:14.972030  151102 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 12:26:14.972124  151102 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 12:26:14.972239  151102 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 12:26:14.972486  151102 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 12:26:14.972571  151102 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 12:26:14.972657  151102 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 12:26:14.972894  151102 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 12:26:14.972987  151102 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 12:26:14.973223  151102 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 12:26:14.973314  151102 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 12:26:14.973551  151102 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 12:26:14.973635  151102 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 12:26:14.973846  151102 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 12:26:14.973928  151102 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 12:26:14.974158  151102 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 12:26:14.974172  151102 kubeadm.go:310] 
	I0819 12:26:14.974224  151102 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 12:26:14.974280  151102 kubeadm.go:310] 		timed out waiting for the condition
	I0819 12:26:14.974290  151102 kubeadm.go:310] 
	I0819 12:26:14.974335  151102 kubeadm.go:310] 	This error is likely caused by:
	I0819 12:26:14.974382  151102 kubeadm.go:310] 		- The kubelet is not running
	I0819 12:26:14.974516  151102 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 12:26:14.974535  151102 kubeadm.go:310] 
	I0819 12:26:14.974665  151102 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 12:26:14.974713  151102 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 12:26:14.974756  151102 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 12:26:14.974767  151102 kubeadm.go:310] 
	I0819 12:26:14.974899  151102 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 12:26:14.975009  151102 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 12:26:14.975020  151102 kubeadm.go:310] 
	I0819 12:26:14.975156  151102 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 12:26:14.975267  151102 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 12:26:14.975376  151102 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 12:26:14.975467  151102 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 12:26:14.975561  151102 kubeadm.go:394] duration metric: took 3m55.92512811s to StartCluster
	I0819 12:26:14.975612  151102 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 12:26:14.975691  151102 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 12:26:14.975805  151102 kubeadm.go:310] 
	I0819 12:26:15.023985  151102 cri.go:89] found id: ""
	I0819 12:26:15.024038  151102 logs.go:276] 0 containers: []
	W0819 12:26:15.024052  151102 logs.go:278] No container was found matching "kube-apiserver"
	I0819 12:26:15.024060  151102 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 12:26:15.024143  151102 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 12:26:15.066278  151102 cri.go:89] found id: ""
	I0819 12:26:15.066309  151102 logs.go:276] 0 containers: []
	W0819 12:26:15.066322  151102 logs.go:278] No container was found matching "etcd"
	I0819 12:26:15.066330  151102 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 12:26:15.066400  151102 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 12:26:15.103575  151102 cri.go:89] found id: ""
	I0819 12:26:15.103608  151102 logs.go:276] 0 containers: []
	W0819 12:26:15.103621  151102 logs.go:278] No container was found matching "coredns"
	I0819 12:26:15.103629  151102 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 12:26:15.103701  151102 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 12:26:15.141633  151102 cri.go:89] found id: ""
	I0819 12:26:15.141670  151102 logs.go:276] 0 containers: []
	W0819 12:26:15.141681  151102 logs.go:278] No container was found matching "kube-scheduler"
	I0819 12:26:15.141690  151102 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 12:26:15.141757  151102 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 12:26:15.179325  151102 cri.go:89] found id: ""
	I0819 12:26:15.179360  151102 logs.go:276] 0 containers: []
	W0819 12:26:15.179372  151102 logs.go:278] No container was found matching "kube-proxy"
	I0819 12:26:15.179390  151102 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 12:26:15.179474  151102 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 12:26:15.217407  151102 cri.go:89] found id: ""
	I0819 12:26:15.217446  151102 logs.go:276] 0 containers: []
	W0819 12:26:15.217457  151102 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 12:26:15.217465  151102 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 12:26:15.217538  151102 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 12:26:15.254373  151102 cri.go:89] found id: ""
	I0819 12:26:15.254401  151102 logs.go:276] 0 containers: []
	W0819 12:26:15.254412  151102 logs.go:278] No container was found matching "kindnet"
	I0819 12:26:15.254426  151102 logs.go:123] Gathering logs for kubelet ...
	I0819 12:26:15.254450  151102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 12:26:15.320165  151102 logs.go:123] Gathering logs for dmesg ...
	I0819 12:26:15.320209  151102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 12:26:15.334220  151102 logs.go:123] Gathering logs for describe nodes ...
	I0819 12:26:15.334252  151102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 12:26:15.446283  151102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 12:26:15.446312  151102 logs.go:123] Gathering logs for CRI-O ...
	I0819 12:26:15.446329  151102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 12:26:15.569478  151102 logs.go:123] Gathering logs for container status ...
	I0819 12:26:15.569515  151102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0819 12:26:15.614969  151102 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 12:26:15.615042  151102 out.go:270] * 
	* 
	W0819 12:26:15.615096  151102 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 12:26:15.615120  151102 out.go:270] * 
	* 
	W0819 12:26:15.616059  151102 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 12:26:15.618845  151102 out.go:201] 
	W0819 12:26:15.619966  151102 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 12:26:15.620007  151102 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 12:26:15.620027  151102 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 12:26:15.621456  151102 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-814177 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-814177
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-814177: (1.282454127s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-814177 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-814177 status --format={{.Host}}: exit status 7 (63.737666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-814177 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-814177 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.548335936s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-814177 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-814177 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-814177 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (83.36365ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-814177] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19476
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19476-99410/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-814177
	    minikube start -p kubernetes-upgrade-814177 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8141772 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-814177 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
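For reference, the first recovery path from the K8S_DOWNGRADE_UNSUPPORTED suggestion above, deleting the profile and recreating it at the requested older version, can be pasted as the sequence below. The commands are copied from the suggestion text; the test itself does not take this path and instead restarts the existing cluster at v1.31.0, as shown next.

	# recreate the cluster at the older Kubernetes version (suggestion option 1)
	minikube delete -p kubernetes-upgrade-814177
	minikube start -p kubernetes-upgrade-814177 --kubernetes-version=v1.20.0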
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-814177 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-814177 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m11.330400689s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-19 12:28:15.04960803 +0000 UTC m=+6195.787181868
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-814177 -n kubernetes-upgrade-814177
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-814177 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-814177 logs -n 25: (1.869969985s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-787042                         | enable-default-cni-787042 | jenkins | v1.33.1 | 19 Aug 24 12:27 UTC | 19 Aug 24 12:27 UTC |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-787042                         | enable-default-cni-787042 | jenkins | v1.33.1 | 19 Aug 24 12:27 UTC | 19 Aug 24 12:27 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-787042                         | enable-default-cni-787042 | jenkins | v1.33.1 | 19 Aug 24 12:27 UTC | 19 Aug 24 12:27 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-787042                         | enable-default-cni-787042 | jenkins | v1.33.1 | 19 Aug 24 12:27 UTC |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-787042                         | enable-default-cni-787042 | jenkins | v1.33.1 | 19 Aug 24 12:27 UTC | 19 Aug 24 12:27 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-787042                         | enable-default-cni-787042 | jenkins | v1.33.1 | 19 Aug 24 12:27 UTC | 19 Aug 24 12:27 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-787042                         | enable-default-cni-787042 | jenkins | v1.33.1 | 19 Aug 24 12:27 UTC |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-787042                         | enable-default-cni-787042 | jenkins | v1.33.1 | 19 Aug 24 12:27 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-787042                         | enable-default-cni-787042 | jenkins | v1.33.1 | 19 Aug 24 12:27 UTC | 19 Aug 24 12:27 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-787042 sudo cat                | enable-default-cni-787042 | jenkins | v1.33.1 | 19 Aug 24 12:27 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-787042 sudo cat                | enable-default-cni-787042 | jenkins | v1.33.1 | 19 Aug 24 12:27 UTC | 19 Aug 24 12:27 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-787042                         | enable-default-cni-787042 | jenkins | v1.33.1 | 19 Aug 24 12:27 UTC | 19 Aug 24 12:27 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-787042                         | enable-default-cni-787042 | jenkins | v1.33.1 | 19 Aug 24 12:27 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-787042                         | enable-default-cni-787042 | jenkins | v1.33.1 | 19 Aug 24 12:27 UTC | 19 Aug 24 12:27 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-787042 sudo cat                | enable-default-cni-787042 | jenkins | v1.33.1 | 19 Aug 24 12:27 UTC | 19 Aug 24 12:27 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-787042                         | enable-default-cni-787042 | jenkins | v1.33.1 | 19 Aug 24 12:27 UTC | 19 Aug 24 12:27 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-787042                         | enable-default-cni-787042 | jenkins | v1.33.1 | 19 Aug 24 12:27 UTC | 19 Aug 24 12:27 UTC |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-787042                         | enable-default-cni-787042 | jenkins | v1.33.1 | 19 Aug 24 12:27 UTC | 19 Aug 24 12:27 UTC |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-787042                         | enable-default-cni-787042 | jenkins | v1.33.1 | 19 Aug 24 12:27 UTC | 19 Aug 24 12:27 UTC |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-787042                         | enable-default-cni-787042 | jenkins | v1.33.1 | 19 Aug 24 12:27 UTC | 19 Aug 24 12:27 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-787042                         | enable-default-cni-787042 | jenkins | v1.33.1 | 19 Aug 24 12:27 UTC | 19 Aug 24 12:27 UTC |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-787042                         | enable-default-cni-787042 | jenkins | v1.33.1 | 19 Aug 24 12:27 UTC | 19 Aug 24 12:27 UTC |
	| start   | -p old-k8s-version-668313                            | old-k8s-version-668313    | jenkins | v1.33.1 | 19 Aug 24 12:27 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	| ssh     | -p flannel-787042 pgrep -a                           | flannel-787042            | jenkins | v1.33.1 | 19 Aug 24 12:28 UTC | 19 Aug 24 12:28 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	| ssh     | -p bridge-787042 pgrep -a                            | bridge-787042             | jenkins | v1.33.1 | 19 Aug 24 12:28 UTC | 19 Aug 24 12:28 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 12:27:27
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 12:27:27.448237  162691 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:27:27.448595  162691 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:27:27.448606  162691 out.go:358] Setting ErrFile to fd 2...
	I0819 12:27:27.448613  162691 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:27:27.448919  162691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 12:27:27.449682  162691 out.go:352] Setting JSON to false
	I0819 12:27:27.451161  162691 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7793,"bootTime":1724062654,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 12:27:27.451234  162691 start.go:139] virtualization: kvm guest
	I0819 12:27:27.453802  162691 out.go:177] * [old-k8s-version-668313] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 12:27:27.455386  162691 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 12:27:27.455488  162691 notify.go:220] Checking for updates...
	I0819 12:27:27.458077  162691 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:27:27.459327  162691 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 12:27:27.460599  162691 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 12:27:27.462495  162691 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 12:27:27.463755  162691 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 12:27:23.777576  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:23.778005  161013 main.go:141] libmachine: (bridge-787042) DBG | unable to find current IP address of domain bridge-787042 in network mk-bridge-787042
	I0819 12:27:23.778033  161013 main.go:141] libmachine: (bridge-787042) DBG | I0819 12:27:23.777962  161350 retry.go:31] will retry after 2.058008868s: waiting for machine to come up
	I0819 12:27:25.838269  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:25.838917  161013 main.go:141] libmachine: (bridge-787042) DBG | unable to find current IP address of domain bridge-787042 in network mk-bridge-787042
	I0819 12:27:25.838945  161013 main.go:141] libmachine: (bridge-787042) DBG | I0819 12:27:25.838820  161350 retry.go:31] will retry after 1.798699683s: waiting for machine to come up
	I0819 12:27:27.465589  162691 config.go:182] Loaded profile config "bridge-787042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:27:27.465746  162691 config.go:182] Loaded profile config "flannel-787042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:27:27.465873  162691 config.go:182] Loaded profile config "kubernetes-upgrade-814177": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:27:27.465996  162691 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 12:27:27.518702  162691 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 12:27:27.520389  162691 start.go:297] selected driver: kvm2
	I0819 12:27:27.520422  162691 start.go:901] validating driver "kvm2" against <nil>
	I0819 12:27:27.520443  162691 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 12:27:27.521741  162691 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:27:27.521829  162691 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19476-99410/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 12:27:27.545065  162691 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 12:27:27.545138  162691 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 12:27:27.545400  162691 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 12:27:27.545479  162691 cni.go:84] Creating CNI manager for ""
	I0819 12:27:27.545494  162691 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 12:27:27.545504  162691 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 12:27:27.545581  162691 start.go:340] cluster config:
	{Name:old-k8s-version-668313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-668313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:27:27.545709  162691 iso.go:125] acquiring lock: {Name:mkf3e460532d8bd3cc1352bbf698537d79a481a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:27:27.547459  162691 out.go:177] * Starting "old-k8s-version-668313" primary control-plane node in "old-k8s-version-668313" cluster
	I0819 12:27:26.705659  159489 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000880219s
	I0819 12:27:26.705742  159489 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 12:27:31.208408  159489 kubeadm.go:310] [api-check] The API server is healthy after 4.502196656s
	I0819 12:27:31.223885  159489 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 12:27:31.240017  159489 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 12:27:31.270249  159489 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 12:27:31.270546  159489 kubeadm.go:310] [mark-control-plane] Marking the node flannel-787042 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 12:27:31.281375  159489 kubeadm.go:310] [bootstrap-token] Using token: 8hwu8c.pod0run8qezhllxc
	I0819 12:27:31.282816  159489 out.go:235]   - Configuring RBAC rules ...
	I0819 12:27:31.282964  159489 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 12:27:31.289215  159489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 12:27:31.302070  159489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 12:27:31.309073  159489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 12:27:27.548619  162691 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 12:27:27.548675  162691 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 12:27:27.548686  162691 cache.go:56] Caching tarball of preloaded images
	I0819 12:27:27.548786  162691 preload.go:172] Found /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 12:27:27.548800  162691 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0819 12:27:27.548926  162691 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/old-k8s-version-668313/config.json ...
	I0819 12:27:27.548950  162691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/old-k8s-version-668313/config.json: {Name:mk1c0511e30eca997635cd9b31047c639a8844f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:27:27.549120  162691 start.go:360] acquireMachinesLock for old-k8s-version-668313: {Name:mk79269fbbd6912302c8df8ddb039d4e2f1a0790 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 12:27:27.640247  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:27.640914  161013 main.go:141] libmachine: (bridge-787042) DBG | unable to find current IP address of domain bridge-787042 in network mk-bridge-787042
	I0819 12:27:27.640939  161013 main.go:141] libmachine: (bridge-787042) DBG | I0819 12:27:27.640827  161350 retry.go:31] will retry after 2.189754433s: waiting for machine to come up
	I0819 12:27:29.833097  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:29.833636  161013 main.go:141] libmachine: (bridge-787042) DBG | unable to find current IP address of domain bridge-787042 in network mk-bridge-787042
	I0819 12:27:29.833664  161013 main.go:141] libmachine: (bridge-787042) DBG | I0819 12:27:29.833583  161350 retry.go:31] will retry after 3.123490019s: waiting for machine to come up
	I0819 12:27:31.315684  159489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 12:27:31.320244  159489 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 12:27:31.618677  159489 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 12:27:32.046573  159489 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 12:27:32.619532  159489 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 12:27:32.619666  159489 kubeadm.go:310] 
	I0819 12:27:32.619782  159489 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 12:27:32.619793  159489 kubeadm.go:310] 
	I0819 12:27:32.619892  159489 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 12:27:32.619903  159489 kubeadm.go:310] 
	I0819 12:27:32.619948  159489 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 12:27:32.620045  159489 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 12:27:32.620092  159489 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 12:27:32.620100  159489 kubeadm.go:310] 
	I0819 12:27:32.620155  159489 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 12:27:32.620162  159489 kubeadm.go:310] 
	I0819 12:27:32.620217  159489 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 12:27:32.620228  159489 kubeadm.go:310] 
	I0819 12:27:32.620289  159489 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 12:27:32.620383  159489 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 12:27:32.620479  159489 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 12:27:32.620489  159489 kubeadm.go:310] 
	I0819 12:27:32.620621  159489 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 12:27:32.620730  159489 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 12:27:32.620737  159489 kubeadm.go:310] 
	I0819 12:27:32.620859  159489 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8hwu8c.pod0run8qezhllxc \
	I0819 12:27:32.621011  159489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9a531b4470c09d8f50eed8894aa9b341ca16ed7810e6d53dbdc0d1c4cff0af52 \
	I0819 12:27:32.621047  159489 kubeadm.go:310] 	--control-plane 
	I0819 12:27:32.621054  159489 kubeadm.go:310] 
	I0819 12:27:32.621169  159489 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 12:27:32.621190  159489 kubeadm.go:310] 
	I0819 12:27:32.621329  159489 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8hwu8c.pod0run8qezhllxc \
	I0819 12:27:32.621469  159489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9a531b4470c09d8f50eed8894aa9b341ca16ed7810e6d53dbdc0d1c4cff0af52 
	I0819 12:27:32.622268  159489 kubeadm.go:310] W0819 12:27:22.393207     856 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 12:27:32.622640  159489 kubeadm.go:310] W0819 12:27:22.393873     856 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 12:27:32.622764  159489 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 12:27:32.622794  159489 cni.go:84] Creating CNI manager for "flannel"
	I0819 12:27:32.624270  159489 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0819 12:27:32.625422  159489 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 12:27:32.631397  159489 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 12:27:32.631426  159489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I0819 12:27:32.655747  159489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 12:27:33.054691  159489 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 12:27:33.054752  159489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:27:33.054771  159489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-787042 minikube.k8s.io/updated_at=2024_08_19T12_27_33_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934 minikube.k8s.io/name=flannel-787042 minikube.k8s.io/primary=true
	I0819 12:27:33.075366  159489 ops.go:34] apiserver oom_adj: -16
	I0819 12:27:33.209684  159489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:27:33.710406  159489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:27:34.210117  159489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:27:34.710385  159489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:27:35.210667  159489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:27:35.709680  159489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:27:36.210382  159489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:27:36.710122  159489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:27:37.210244  159489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:27:37.292230  159489 kubeadm.go:1113] duration metric: took 4.237538003s to wait for elevateKubeSystemPrivileges
	I0819 12:27:37.292272  159489 kubeadm.go:394] duration metric: took 15.116384783s to StartCluster
	I0819 12:27:37.292298  159489 settings.go:142] acquiring lock: {Name:mk5d5753fc545a0b5fdfa44a1e5cbc5d198d9dfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:27:37.292394  159489 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 12:27:37.293307  159489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/kubeconfig: {Name:mk73914d2bd0db664ade6c952753a7dd30404784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:27:37.293525  159489 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 12:27:37.293539  159489 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.196 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 12:27:37.293640  159489 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 12:27:37.293716  159489 addons.go:69] Setting storage-provisioner=true in profile "flannel-787042"
	I0819 12:27:37.293753  159489 addons.go:234] Setting addon storage-provisioner=true in "flannel-787042"
	I0819 12:27:37.293752  159489 addons.go:69] Setting default-storageclass=true in profile "flannel-787042"
	I0819 12:27:37.293762  159489 config.go:182] Loaded profile config "flannel-787042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:27:37.293801  159489 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-787042"
	I0819 12:27:37.293808  159489 host.go:66] Checking if "flannel-787042" exists ...
	I0819 12:27:37.294258  159489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:27:37.294273  159489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:27:37.294291  159489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:27:37.294311  159489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:27:37.295170  159489 out.go:177] * Verifying Kubernetes components...
	I0819 12:27:37.296671  159489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:27:37.310093  159489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33579
	I0819 12:27:37.310206  159489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35989
	I0819 12:27:37.310648  159489 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:27:37.310782  159489 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:27:37.311195  159489 main.go:141] libmachine: Using API Version  1
	I0819 12:27:37.311212  159489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:27:37.311323  159489 main.go:141] libmachine: Using API Version  1
	I0819 12:27:37.311345  159489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:27:37.311700  159489 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:27:37.311705  159489 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:27:37.311921  159489 main.go:141] libmachine: (flannel-787042) Calling .GetState
	I0819 12:27:37.312277  159489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:27:37.312320  159489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:27:37.315508  159489 addons.go:234] Setting addon default-storageclass=true in "flannel-787042"
	I0819 12:27:37.315554  159489 host.go:66] Checking if "flannel-787042" exists ...
	I0819 12:27:37.315966  159489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:27:37.316011  159489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:27:37.330604  159489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37155
	I0819 12:27:37.331133  159489 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:27:37.331640  159489 main.go:141] libmachine: Using API Version  1
	I0819 12:27:37.331663  159489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:27:37.331775  159489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34727
	I0819 12:27:37.332024  159489 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:27:37.332165  159489 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:27:37.332258  159489 main.go:141] libmachine: (flannel-787042) Calling .GetState
	I0819 12:27:37.332654  159489 main.go:141] libmachine: Using API Version  1
	I0819 12:27:37.332677  159489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:27:37.332994  159489 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:27:37.333569  159489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:27:37.333651  159489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:27:37.334327  159489 main.go:141] libmachine: (flannel-787042) Calling .DriverName
	I0819 12:27:37.336390  159489 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 12:27:32.960895  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:32.961511  161013 main.go:141] libmachine: (bridge-787042) DBG | unable to find current IP address of domain bridge-787042 in network mk-bridge-787042
	I0819 12:27:32.961540  161013 main.go:141] libmachine: (bridge-787042) DBG | I0819 12:27:32.961464  161350 retry.go:31] will retry after 3.546723561s: waiting for machine to come up
	I0819 12:27:36.511108  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:36.511800  161013 main.go:141] libmachine: (bridge-787042) Found IP for machine: 192.168.39.222
	I0819 12:27:36.511830  161013 main.go:141] libmachine: (bridge-787042) Reserving static IP address...
	I0819 12:27:36.511849  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has current primary IP address 192.168.39.222 and MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:36.512222  161013 main.go:141] libmachine: (bridge-787042) DBG | unable to find host DHCP lease matching {name: "bridge-787042", mac: "52:54:00:88:36:68", ip: "192.168.39.222"} in network mk-bridge-787042
	I0819 12:27:36.603434  161013 main.go:141] libmachine: (bridge-787042) DBG | Getting to WaitForSSH function...
	I0819 12:27:36.603465  161013 main.go:141] libmachine: (bridge-787042) Reserved static IP address: 192.168.39.222
	I0819 12:27:36.603512  161013 main.go:141] libmachine: (bridge-787042) Waiting for SSH to be available...
	I0819 12:27:36.606405  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:36.606840  161013 main.go:141] libmachine: (bridge-787042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:36:68", ip: ""} in network mk-bridge-787042: {Iface:virbr2 ExpiryTime:2024-08-19 13:27:30 +0000 UTC Type:0 Mac:52:54:00:88:36:68 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:minikube Clientid:01:52:54:00:88:36:68}
	I0819 12:27:36.606869  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined IP address 192.168.39.222 and MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:36.607078  161013 main.go:141] libmachine: (bridge-787042) DBG | Using SSH client type: external
	I0819 12:27:36.607108  161013 main.go:141] libmachine: (bridge-787042) DBG | Using SSH private key: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/bridge-787042/id_rsa (-rw-------)
	I0819 12:27:36.607185  161013 main.go:141] libmachine: (bridge-787042) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19476-99410/.minikube/machines/bridge-787042/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 12:27:36.607200  161013 main.go:141] libmachine: (bridge-787042) DBG | About to run SSH command:
	I0819 12:27:36.607212  161013 main.go:141] libmachine: (bridge-787042) DBG | exit 0
	I0819 12:27:36.744813  161013 main.go:141] libmachine: (bridge-787042) DBG | SSH cmd err, output: <nil>: 
	I0819 12:27:36.745120  161013 main.go:141] libmachine: (bridge-787042) KVM machine creation complete!
	I0819 12:27:36.745492  161013 main.go:141] libmachine: (bridge-787042) Calling .GetConfigRaw
	I0819 12:27:36.746102  161013 main.go:141] libmachine: (bridge-787042) Calling .DriverName
	I0819 12:27:36.746331  161013 main.go:141] libmachine: (bridge-787042) Calling .DriverName
	I0819 12:27:36.746512  161013 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 12:27:36.746529  161013 main.go:141] libmachine: (bridge-787042) Calling .GetState
	I0819 12:27:36.747957  161013 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 12:27:36.747975  161013 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 12:27:36.747983  161013 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 12:27:36.747991  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHHostname
	I0819 12:27:36.750754  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:36.751199  161013 main.go:141] libmachine: (bridge-787042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:36:68", ip: ""} in network mk-bridge-787042: {Iface:virbr2 ExpiryTime:2024-08-19 13:27:30 +0000 UTC Type:0 Mac:52:54:00:88:36:68 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:bridge-787042 Clientid:01:52:54:00:88:36:68}
	I0819 12:27:36.751227  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined IP address 192.168.39.222 and MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:36.751441  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHPort
	I0819 12:27:36.751663  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHKeyPath
	I0819 12:27:36.751852  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHKeyPath
	I0819 12:27:36.752028  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHUsername
	I0819 12:27:36.752230  161013 main.go:141] libmachine: Using SSH client type: native
	I0819 12:27:36.752488  161013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0819 12:27:36.752508  161013 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 12:27:36.871027  161013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:27:36.871055  161013 main.go:141] libmachine: Detecting the provisioner...
	I0819 12:27:36.871066  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHHostname
	I0819 12:27:36.873886  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:36.874274  161013 main.go:141] libmachine: (bridge-787042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:36:68", ip: ""} in network mk-bridge-787042: {Iface:virbr2 ExpiryTime:2024-08-19 13:27:30 +0000 UTC Type:0 Mac:52:54:00:88:36:68 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:bridge-787042 Clientid:01:52:54:00:88:36:68}
	I0819 12:27:36.874299  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined IP address 192.168.39.222 and MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:36.874566  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHPort
	I0819 12:27:36.874776  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHKeyPath
	I0819 12:27:36.874973  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHKeyPath
	I0819 12:27:36.875159  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHUsername
	I0819 12:27:36.875334  161013 main.go:141] libmachine: Using SSH client type: native
	I0819 12:27:36.875501  161013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0819 12:27:36.875511  161013 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 12:27:36.992164  161013 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 12:27:36.992241  161013 main.go:141] libmachine: found compatible host: buildroot
	I0819 12:27:36.992251  161013 main.go:141] libmachine: Provisioning with buildroot...
	I0819 12:27:36.992262  161013 main.go:141] libmachine: (bridge-787042) Calling .GetMachineName
	I0819 12:27:36.992528  161013 buildroot.go:166] provisioning hostname "bridge-787042"
	I0819 12:27:36.992559  161013 main.go:141] libmachine: (bridge-787042) Calling .GetMachineName
	I0819 12:27:36.992717  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHHostname
	I0819 12:27:36.995348  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:36.995774  161013 main.go:141] libmachine: (bridge-787042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:36:68", ip: ""} in network mk-bridge-787042: {Iface:virbr2 ExpiryTime:2024-08-19 13:27:30 +0000 UTC Type:0 Mac:52:54:00:88:36:68 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:bridge-787042 Clientid:01:52:54:00:88:36:68}
	I0819 12:27:36.995797  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined IP address 192.168.39.222 and MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:36.995922  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHPort
	I0819 12:27:36.996126  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHKeyPath
	I0819 12:27:36.996311  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHKeyPath
	I0819 12:27:36.996471  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHUsername
	I0819 12:27:36.996650  161013 main.go:141] libmachine: Using SSH client type: native
	I0819 12:27:36.996827  161013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0819 12:27:36.996838  161013 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-787042 && echo "bridge-787042" | sudo tee /etc/hostname
	I0819 12:27:37.125685  161013 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-787042
	
	I0819 12:27:37.125713  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHHostname
	I0819 12:27:37.128288  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:37.128660  161013 main.go:141] libmachine: (bridge-787042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:36:68", ip: ""} in network mk-bridge-787042: {Iface:virbr2 ExpiryTime:2024-08-19 13:27:30 +0000 UTC Type:0 Mac:52:54:00:88:36:68 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:bridge-787042 Clientid:01:52:54:00:88:36:68}
	I0819 12:27:37.128692  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined IP address 192.168.39.222 and MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:37.128844  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHPort
	I0819 12:27:37.129048  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHKeyPath
	I0819 12:27:37.129249  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHKeyPath
	I0819 12:27:37.129415  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHUsername
	I0819 12:27:37.129612  161013 main.go:141] libmachine: Using SSH client type: native
	I0819 12:27:37.129789  161013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0819 12:27:37.129815  161013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-787042' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-787042/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-787042' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 12:27:37.256659  161013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:27:37.256699  161013 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19476-99410/.minikube CaCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19476-99410/.minikube}
	I0819 12:27:37.256771  161013 buildroot.go:174] setting up certificates
	I0819 12:27:37.256788  161013 provision.go:84] configureAuth start
	I0819 12:27:37.256804  161013 main.go:141] libmachine: (bridge-787042) Calling .GetMachineName
	I0819 12:27:37.257165  161013 main.go:141] libmachine: (bridge-787042) Calling .GetIP
	I0819 12:27:37.259858  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:37.260271  161013 main.go:141] libmachine: (bridge-787042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:36:68", ip: ""} in network mk-bridge-787042: {Iface:virbr2 ExpiryTime:2024-08-19 13:27:30 +0000 UTC Type:0 Mac:52:54:00:88:36:68 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:bridge-787042 Clientid:01:52:54:00:88:36:68}
	I0819 12:27:37.260300  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined IP address 192.168.39.222 and MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:37.260470  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHHostname
	I0819 12:27:37.262977  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:37.263276  161013 main.go:141] libmachine: (bridge-787042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:36:68", ip: ""} in network mk-bridge-787042: {Iface:virbr2 ExpiryTime:2024-08-19 13:27:30 +0000 UTC Type:0 Mac:52:54:00:88:36:68 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:bridge-787042 Clientid:01:52:54:00:88:36:68}
	I0819 12:27:37.263303  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined IP address 192.168.39.222 and MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:37.263451  161013 provision.go:143] copyHostCerts
	I0819 12:27:37.263516  161013 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem, removing ...
	I0819 12:27:37.263539  161013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 12:27:37.263621  161013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem (1123 bytes)
	I0819 12:27:37.263758  161013 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem, removing ...
	I0819 12:27:37.263770  161013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 12:27:37.263803  161013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem (1679 bytes)
	I0819 12:27:37.263888  161013 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem, removing ...
	I0819 12:27:37.263899  161013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 12:27:37.263927  161013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem (1082 bytes)
	I0819 12:27:37.263995  161013 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem org=jenkins.bridge-787042 san=[127.0.0.1 192.168.39.222 bridge-787042 localhost minikube]
	I0819 12:27:38.216320  161206 start.go:364] duration metric: took 34.344287657s to acquireMachinesLock for "kubernetes-upgrade-814177"
	I0819 12:27:38.216386  161206 start.go:96] Skipping create...Using existing machine configuration
	I0819 12:27:38.216399  161206 fix.go:54] fixHost starting: 
	I0819 12:27:38.216783  161206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:27:38.216828  161206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:27:38.235823  161206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34667
	I0819 12:27:38.236301  161206 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:27:38.236867  161206 main.go:141] libmachine: Using API Version  1
	I0819 12:27:38.236894  161206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:27:38.237264  161206 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:27:38.237483  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .DriverName
	I0819 12:27:38.237641  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetState
	I0819 12:27:38.239438  161206 fix.go:112] recreateIfNeeded on kubernetes-upgrade-814177: state=Running err=<nil>
	W0819 12:27:38.239464  161206 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 12:27:38.241201  161206 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-814177" VM ...
	I0819 12:27:37.337702  159489 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 12:27:37.337719  159489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 12:27:37.337736  159489 main.go:141] libmachine: (flannel-787042) Calling .GetSSHHostname
	I0819 12:27:37.341088  159489 main.go:141] libmachine: (flannel-787042) DBG | domain flannel-787042 has defined MAC address 52:54:00:34:c2:6a in network mk-flannel-787042
	I0819 12:27:37.341548  159489 main.go:141] libmachine: (flannel-787042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:c2:6a", ip: ""} in network mk-flannel-787042: {Iface:virbr1 ExpiryTime:2024-08-19 13:27:02 +0000 UTC Type:0 Mac:52:54:00:34:c2:6a Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:flannel-787042 Clientid:01:52:54:00:34:c2:6a}
	I0819 12:27:37.341576  159489 main.go:141] libmachine: (flannel-787042) DBG | domain flannel-787042 has defined IP address 192.168.61.196 and MAC address 52:54:00:34:c2:6a in network mk-flannel-787042
	I0819 12:27:37.341918  159489 main.go:141] libmachine: (flannel-787042) Calling .GetSSHPort
	I0819 12:27:37.342114  159489 main.go:141] libmachine: (flannel-787042) Calling .GetSSHKeyPath
	I0819 12:27:37.342283  159489 main.go:141] libmachine: (flannel-787042) Calling .GetSSHUsername
	I0819 12:27:37.342408  159489 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/flannel-787042/id_rsa Username:docker}
	I0819 12:27:37.352752  159489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38187
	I0819 12:27:37.353241  159489 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:27:37.353748  159489 main.go:141] libmachine: Using API Version  1
	I0819 12:27:37.353761  159489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:27:37.354126  159489 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:27:37.354279  159489 main.go:141] libmachine: (flannel-787042) Calling .GetState
	I0819 12:27:37.356194  159489 main.go:141] libmachine: (flannel-787042) Calling .DriverName
	I0819 12:27:37.356432  159489 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 12:27:37.356451  159489 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 12:27:37.356471  159489 main.go:141] libmachine: (flannel-787042) Calling .GetSSHHostname
	I0819 12:27:37.359376  159489 main.go:141] libmachine: (flannel-787042) DBG | domain flannel-787042 has defined MAC address 52:54:00:34:c2:6a in network mk-flannel-787042
	I0819 12:27:37.359859  159489 main.go:141] libmachine: (flannel-787042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:c2:6a", ip: ""} in network mk-flannel-787042: {Iface:virbr1 ExpiryTime:2024-08-19 13:27:02 +0000 UTC Type:0 Mac:52:54:00:34:c2:6a Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:flannel-787042 Clientid:01:52:54:00:34:c2:6a}
	I0819 12:27:37.359889  159489 main.go:141] libmachine: (flannel-787042) DBG | domain flannel-787042 has defined IP address 192.168.61.196 and MAC address 52:54:00:34:c2:6a in network mk-flannel-787042
	I0819 12:27:37.360071  159489 main.go:141] libmachine: (flannel-787042) Calling .GetSSHPort
	I0819 12:27:37.360291  159489 main.go:141] libmachine: (flannel-787042) Calling .GetSSHKeyPath
	I0819 12:27:37.360471  159489 main.go:141] libmachine: (flannel-787042) Calling .GetSSHUsername
	I0819 12:27:37.360576  159489 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/flannel-787042/id_rsa Username:docker}
	I0819 12:27:37.446985  159489 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 12:27:37.490486  159489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:27:37.668234  159489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 12:27:37.704283  159489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 12:27:38.048967  159489 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0819 12:27:38.050003  159489 node_ready.go:35] waiting up to 15m0s for node "flannel-787042" to be "Ready" ...
	I0819 12:27:38.328381  159489 main.go:141] libmachine: Making call to close driver server
	I0819 12:27:38.328404  159489 main.go:141] libmachine: (flannel-787042) Calling .Close
	I0819 12:27:38.328486  159489 main.go:141] libmachine: Making call to close driver server
	I0819 12:27:38.328509  159489 main.go:141] libmachine: (flannel-787042) Calling .Close
	I0819 12:27:38.328807  159489 main.go:141] libmachine: Successfully made call to close driver server
	I0819 12:27:38.328830  159489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 12:27:38.328834  159489 main.go:141] libmachine: (flannel-787042) DBG | Closing plugin on server side
	I0819 12:27:38.328859  159489 main.go:141] libmachine: (flannel-787042) DBG | Closing plugin on server side
	I0819 12:27:38.328866  159489 main.go:141] libmachine: Successfully made call to close driver server
	I0819 12:27:38.328839  159489 main.go:141] libmachine: Making call to close driver server
	I0819 12:27:38.328881  159489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 12:27:38.328884  159489 main.go:141] libmachine: (flannel-787042) Calling .Close
	I0819 12:27:38.328906  159489 main.go:141] libmachine: Making call to close driver server
	I0819 12:27:38.328915  159489 main.go:141] libmachine: (flannel-787042) Calling .Close
	I0819 12:27:38.329133  159489 main.go:141] libmachine: Successfully made call to close driver server
	I0819 12:27:38.329145  159489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 12:27:38.330687  159489 main.go:141] libmachine: Successfully made call to close driver server
	I0819 12:27:38.330699  159489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 12:27:38.330704  159489 main.go:141] libmachine: (flannel-787042) DBG | Closing plugin on server side
	I0819 12:27:38.347897  159489 main.go:141] libmachine: Making call to close driver server
	I0819 12:27:38.347927  159489 main.go:141] libmachine: (flannel-787042) Calling .Close
	I0819 12:27:38.348224  159489 main.go:141] libmachine: Successfully made call to close driver server
	I0819 12:27:38.348245  159489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 12:27:38.350234  159489 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
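
From here the test waits up to 15 minutes for the flannel-787042 node to report Ready (the repeated `has status "Ready":"False"` lines that follow). A rough client-go sketch of that kind of readiness poll is shown below; it is illustrative only, the kubeconfig path and 3-second poll interval are assumptions, while the node name and timeout come from the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(15 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "flannel-787042", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(3 * time.Second) // poll interval, arbitrary choice
	}
	fmt.Println("timed out waiting for node to become Ready")
}
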
	I0819 12:27:38.242553  161206 machine.go:93] provisionDockerMachine start ...
	I0819 12:27:38.242584  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .DriverName
	I0819 12:27:38.242844  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHHostname
	I0819 12:27:38.245549  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:38.245957  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:26:39 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:27:38.246000  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:38.246158  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHPort
	I0819 12:27:38.246357  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:27:38.246535  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:27:38.246763  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHUsername
	I0819 12:27:38.246943  161206 main.go:141] libmachine: Using SSH client type: native
	I0819 12:27:38.247141  161206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0819 12:27:38.247156  161206 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 12:27:38.359874  161206 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-814177
	
	I0819 12:27:38.359908  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetMachineName
	I0819 12:27:38.360171  161206 buildroot.go:166] provisioning hostname "kubernetes-upgrade-814177"
	I0819 12:27:38.360199  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetMachineName
	I0819 12:27:38.360403  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHHostname
	I0819 12:27:38.362962  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:38.363362  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:26:39 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:27:38.363392  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:38.363653  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHPort
	I0819 12:27:38.363879  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:27:38.364097  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:27:38.364251  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHUsername
	I0819 12:27:38.364435  161206 main.go:141] libmachine: Using SSH client type: native
	I0819 12:27:38.364698  161206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0819 12:27:38.364713  161206 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-814177 && echo "kubernetes-upgrade-814177" | sudo tee /etc/hostname
	I0819 12:27:38.489055  161206 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-814177
	
	I0819 12:27:38.489096  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHHostname
	I0819 12:27:38.491571  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:38.491911  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:26:39 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:27:38.491941  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:38.492164  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHPort
	I0819 12:27:38.492365  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:27:38.492552  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:27:38.492700  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHUsername
	I0819 12:27:38.492863  161206 main.go:141] libmachine: Using SSH client type: native
	I0819 12:27:38.493075  161206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0819 12:27:38.493093  161206 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-814177' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-814177/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-814177' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 12:27:38.608492  161206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:27:38.608530  161206 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19476-99410/.minikube CaCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19476-99410/.minikube}
	I0819 12:27:38.608577  161206 buildroot.go:174] setting up certificates
	I0819 12:27:38.608595  161206 provision.go:84] configureAuth start
	I0819 12:27:38.608614  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetMachineName
	I0819 12:27:38.609010  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetIP
	I0819 12:27:38.611882  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:38.612267  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:26:39 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:27:38.612297  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:38.612425  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHHostname
	I0819 12:27:38.614706  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:38.615086  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:26:39 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:27:38.615115  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:38.615273  161206 provision.go:143] copyHostCerts
	I0819 12:27:38.615340  161206 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem, removing ...
	I0819 12:27:38.615363  161206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 12:27:38.615436  161206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem (1082 bytes)
	I0819 12:27:38.615548  161206 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem, removing ...
	I0819 12:27:38.615558  161206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 12:27:38.615589  161206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem (1123 bytes)
	I0819 12:27:38.615669  161206 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem, removing ...
	I0819 12:27:38.615682  161206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 12:27:38.615709  161206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem (1679 bytes)
	I0819 12:27:38.615791  161206 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-814177 san=[127.0.0.1 192.168.50.23 kubernetes-upgrade-814177 localhost minikube]
	I0819 12:27:38.685050  161206 provision.go:177] copyRemoteCerts
	I0819 12:27:38.685122  161206 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 12:27:38.685148  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHHostname
	I0819 12:27:38.687826  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:38.688218  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:26:39 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:27:38.688251  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:38.688447  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHPort
	I0819 12:27:38.688653  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:27:38.688845  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHUsername
	I0819 12:27:38.688986  161206 sshutil.go:53] new ssh client: &{IP:192.168.50.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/kubernetes-upgrade-814177/id_rsa Username:docker}
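
copyRemoteCerts above pushes ca.pem plus the freshly generated server certificate and key into /etc/docker on the guest using the machine's SSH key. A rough Go sketch of an equivalent transfer by shelling out to scp and ssh follows; it is an illustration, not the code path minikube actually uses, and the stage-in-/tmp-then-sudo-mv step is an assumption.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Key path and address copied from the log lines above.
	key := "/home/jenkins/minikube-integration/19476-99410/.minikube/machines/kubernetes-upgrade-814177/id_rsa"
	host := "docker@192.168.50.23"

	// Local files and their destinations under /etc/docker,
	// mirroring the ca.pem/server.pem/server-key.pem copies in the log.
	files := map[string]string{
		"ca.pem":         "/etc/docker/ca.pem",
		"server.pem":     "/etc/docker/server.pem",
		"server-key.pem": "/etc/docker/server-key.pem",
	}

	for local, remote := range files {
		// Copy to a temporary path first, then move into place with sudo,
		// since /etc/docker is root-owned.
		tmp := "/tmp/" + local
		if out, err := exec.Command("scp", "-i", key, local, host+":"+tmp).CombinedOutput(); err != nil {
			fmt.Printf("scp %s failed: %v\n%s", local, err, out)
			return
		}
		if out, err := exec.Command("ssh", "-i", key, host, "sudo", "mv", tmp, remote).CombinedOutput(); err != nil {
			fmt.Printf("install %s failed: %v\n%s", remote, err, out)
			return
		}
	}
	fmt.Println("certificates installed")
}
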
	I0819 12:27:37.471973  161013 provision.go:177] copyRemoteCerts
	I0819 12:27:37.472041  161013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 12:27:37.472071  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHHostname
	I0819 12:27:37.474920  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:37.475317  161013 main.go:141] libmachine: (bridge-787042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:36:68", ip: ""} in network mk-bridge-787042: {Iface:virbr2 ExpiryTime:2024-08-19 13:27:30 +0000 UTC Type:0 Mac:52:54:00:88:36:68 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:bridge-787042 Clientid:01:52:54:00:88:36:68}
	I0819 12:27:37.475341  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined IP address 192.168.39.222 and MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:37.475539  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHPort
	I0819 12:27:37.475702  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHKeyPath
	I0819 12:27:37.475888  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHUsername
	I0819 12:27:37.476076  161013 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/bridge-787042/id_rsa Username:docker}
	I0819 12:27:37.563077  161013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 12:27:37.588051  161013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 12:27:37.619497  161013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 12:27:37.646213  161013 provision.go:87] duration metric: took 389.407476ms to configureAuth
	I0819 12:27:37.646250  161013 buildroot.go:189] setting minikube options for container-runtime
	I0819 12:27:37.646431  161013 config.go:182] Loaded profile config "bridge-787042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:27:37.646510  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHHostname
	I0819 12:27:37.649586  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:37.649934  161013 main.go:141] libmachine: (bridge-787042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:36:68", ip: ""} in network mk-bridge-787042: {Iface:virbr2 ExpiryTime:2024-08-19 13:27:30 +0000 UTC Type:0 Mac:52:54:00:88:36:68 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:bridge-787042 Clientid:01:52:54:00:88:36:68}
	I0819 12:27:37.649966  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined IP address 192.168.39.222 and MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:37.650121  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHPort
	I0819 12:27:37.650303  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHKeyPath
	I0819 12:27:37.650423  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHKeyPath
	I0819 12:27:37.650512  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHUsername
	I0819 12:27:37.650715  161013 main.go:141] libmachine: Using SSH client type: native
	I0819 12:27:37.650926  161013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0819 12:27:37.650951  161013 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 12:27:37.941129  161013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 12:27:37.941172  161013 main.go:141] libmachine: Checking connection to Docker...
	I0819 12:27:37.941195  161013 main.go:141] libmachine: (bridge-787042) Calling .GetURL
	I0819 12:27:37.942758  161013 main.go:141] libmachine: (bridge-787042) DBG | Using libvirt version 6000000
	I0819 12:27:37.945509  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:37.945949  161013 main.go:141] libmachine: (bridge-787042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:36:68", ip: ""} in network mk-bridge-787042: {Iface:virbr2 ExpiryTime:2024-08-19 13:27:30 +0000 UTC Type:0 Mac:52:54:00:88:36:68 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:bridge-787042 Clientid:01:52:54:00:88:36:68}
	I0819 12:27:37.946004  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined IP address 192.168.39.222 and MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:37.946152  161013 main.go:141] libmachine: Docker is up and running!
	I0819 12:27:37.946164  161013 main.go:141] libmachine: Reticulating splines...
	I0819 12:27:37.946174  161013 client.go:171] duration metric: took 23.0275725s to LocalClient.Create
	I0819 12:27:37.946201  161013 start.go:167] duration metric: took 23.027641266s to libmachine.API.Create "bridge-787042"
	I0819 12:27:37.946214  161013 start.go:293] postStartSetup for "bridge-787042" (driver="kvm2")
	I0819 12:27:37.946229  161013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 12:27:37.946253  161013 main.go:141] libmachine: (bridge-787042) Calling .DriverName
	I0819 12:27:37.946528  161013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 12:27:37.946573  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHHostname
	I0819 12:27:37.949049  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:37.949470  161013 main.go:141] libmachine: (bridge-787042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:36:68", ip: ""} in network mk-bridge-787042: {Iface:virbr2 ExpiryTime:2024-08-19 13:27:30 +0000 UTC Type:0 Mac:52:54:00:88:36:68 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:bridge-787042 Clientid:01:52:54:00:88:36:68}
	I0819 12:27:37.949498  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined IP address 192.168.39.222 and MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:37.949711  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHPort
	I0819 12:27:37.949917  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHKeyPath
	I0819 12:27:37.950103  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHUsername
	I0819 12:27:37.950296  161013 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/bridge-787042/id_rsa Username:docker}
	I0819 12:27:38.047127  161013 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 12:27:38.052272  161013 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 12:27:38.052307  161013 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/addons for local assets ...
	I0819 12:27:38.052370  161013 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/files for local assets ...
	I0819 12:27:38.052466  161013 filesync.go:149] local asset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> 1066322.pem in /etc/ssl/certs
	I0819 12:27:38.052587  161013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 12:27:38.062823  161013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 12:27:38.092713  161013 start.go:296] duration metric: took 146.480831ms for postStartSetup
	I0819 12:27:38.092786  161013 main.go:141] libmachine: (bridge-787042) Calling .GetConfigRaw
	I0819 12:27:38.093486  161013 main.go:141] libmachine: (bridge-787042) Calling .GetIP
	I0819 12:27:38.096471  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:38.096827  161013 main.go:141] libmachine: (bridge-787042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:36:68", ip: ""} in network mk-bridge-787042: {Iface:virbr2 ExpiryTime:2024-08-19 13:27:30 +0000 UTC Type:0 Mac:52:54:00:88:36:68 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:bridge-787042 Clientid:01:52:54:00:88:36:68}
	I0819 12:27:38.096856  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined IP address 192.168.39.222 and MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:38.097196  161013 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/config.json ...
	I0819 12:27:38.097486  161013 start.go:128] duration metric: took 23.200879173s to createHost
	I0819 12:27:38.097519  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHHostname
	I0819 12:27:38.099932  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:38.100260  161013 main.go:141] libmachine: (bridge-787042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:36:68", ip: ""} in network mk-bridge-787042: {Iface:virbr2 ExpiryTime:2024-08-19 13:27:30 +0000 UTC Type:0 Mac:52:54:00:88:36:68 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:bridge-787042 Clientid:01:52:54:00:88:36:68}
	I0819 12:27:38.100288  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined IP address 192.168.39.222 and MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:38.100477  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHPort
	I0819 12:27:38.100685  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHKeyPath
	I0819 12:27:38.100883  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHKeyPath
	I0819 12:27:38.101020  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHUsername
	I0819 12:27:38.101182  161013 main.go:141] libmachine: Using SSH client type: native
	I0819 12:27:38.101407  161013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0819 12:27:38.101468  161013 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 12:27:38.216164  161013 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724070458.186179129
	
	I0819 12:27:38.216189  161013 fix.go:216] guest clock: 1724070458.186179129
	I0819 12:27:38.216201  161013 fix.go:229] Guest: 2024-08-19 12:27:38.186179129 +0000 UTC Remote: 2024-08-19 12:27:38.097504511 +0000 UTC m=+40.683294483 (delta=88.674618ms)
	I0819 12:27:38.216226  161013 fix.go:200] guest clock delta is within tolerance: 88.674618ms
	I0819 12:27:38.216233  161013 start.go:83] releasing machines lock for "bridge-787042", held for 23.319822055s
	I0819 12:27:38.216282  161013 main.go:141] libmachine: (bridge-787042) Calling .DriverName
	I0819 12:27:38.216628  161013 main.go:141] libmachine: (bridge-787042) Calling .GetIP
	I0819 12:27:38.219991  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:38.220484  161013 main.go:141] libmachine: (bridge-787042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:36:68", ip: ""} in network mk-bridge-787042: {Iface:virbr2 ExpiryTime:2024-08-19 13:27:30 +0000 UTC Type:0 Mac:52:54:00:88:36:68 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:bridge-787042 Clientid:01:52:54:00:88:36:68}
	I0819 12:27:38.220510  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined IP address 192.168.39.222 and MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:38.220708  161013 main.go:141] libmachine: (bridge-787042) Calling .DriverName
	I0819 12:27:38.221312  161013 main.go:141] libmachine: (bridge-787042) Calling .DriverName
	I0819 12:27:38.221506  161013 main.go:141] libmachine: (bridge-787042) Calling .DriverName
	I0819 12:27:38.221595  161013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 12:27:38.221656  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHHostname
	I0819 12:27:38.221779  161013 ssh_runner.go:195] Run: cat /version.json
	I0819 12:27:38.221807  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHHostname
	I0819 12:27:38.225064  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:38.225335  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:38.225482  161013 main.go:141] libmachine: (bridge-787042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:36:68", ip: ""} in network mk-bridge-787042: {Iface:virbr2 ExpiryTime:2024-08-19 13:27:30 +0000 UTC Type:0 Mac:52:54:00:88:36:68 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:bridge-787042 Clientid:01:52:54:00:88:36:68}
	I0819 12:27:38.225509  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined IP address 192.168.39.222 and MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:38.225669  161013 main.go:141] libmachine: (bridge-787042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:36:68", ip: ""} in network mk-bridge-787042: {Iface:virbr2 ExpiryTime:2024-08-19 13:27:30 +0000 UTC Type:0 Mac:52:54:00:88:36:68 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:bridge-787042 Clientid:01:52:54:00:88:36:68}
	I0819 12:27:38.225705  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHPort
	I0819 12:27:38.225709  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined IP address 192.168.39.222 and MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:38.225841  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHPort
	I0819 12:27:38.225903  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHKeyPath
	I0819 12:27:38.226045  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHKeyPath
	I0819 12:27:38.226131  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHUsername
	I0819 12:27:38.226208  161013 main.go:141] libmachine: (bridge-787042) Calling .GetSSHUsername
	I0819 12:27:38.226309  161013 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/bridge-787042/id_rsa Username:docker}
	I0819 12:27:38.226378  161013 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/bridge-787042/id_rsa Username:docker}
	I0819 12:27:38.308680  161013 ssh_runner.go:195] Run: systemctl --version
	I0819 12:27:38.334358  161013 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 12:27:38.496274  161013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 12:27:38.502353  161013 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 12:27:38.502422  161013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 12:27:38.520297  161013 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 12:27:38.520342  161013 start.go:495] detecting cgroup driver to use...
	I0819 12:27:38.520426  161013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 12:27:38.539465  161013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 12:27:38.555587  161013 docker.go:217] disabling cri-docker service (if available) ...
	I0819 12:27:38.555648  161013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 12:27:38.569478  161013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 12:27:38.583523  161013 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 12:27:38.710497  161013 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 12:27:38.902587  161013 docker.go:233] disabling docker service ...
	I0819 12:27:38.902678  161013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 12:27:38.918388  161013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 12:27:38.933262  161013 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 12:27:39.053649  161013 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 12:27:39.175909  161013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 12:27:39.189396  161013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 12:27:39.207070  161013 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 12:27:39.207143  161013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:27:39.217571  161013 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 12:27:39.217642  161013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:27:39.227983  161013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:27:39.239505  161013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:27:39.250710  161013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 12:27:39.261476  161013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:27:39.272781  161013 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:27:39.291459  161013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:27:39.302294  161013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 12:27:39.312268  161013 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 12:27:39.312338  161013 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 12:27:39.325705  161013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 12:27:39.335611  161013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:27:39.441744  161013 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 12:27:39.572927  161013 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 12:27:39.573016  161013 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 12:27:39.578048  161013 start.go:563] Will wait 60s for crictl version
	I0819 12:27:39.578120  161013 ssh_runner.go:195] Run: which crictl
	I0819 12:27:39.582056  161013 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 12:27:39.618438  161013 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 12:27:39.618543  161013 ssh_runner.go:195] Run: crio --version
	I0819 12:27:39.649606  161013 ssh_runner.go:195] Run: crio --version
	I0819 12:27:39.680638  161013 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
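
Before the "Preparing Kubernetes" step above, the bring-up waits up to 60 seconds for /var/run/crio/crio.sock to exist and then asks crictl for the runtime version. A compact Go sketch of that wait-then-verify pattern is below; the socket path and timeout mirror the log, while the 500 ms retry interval is an assumption.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock"

	// Wait up to 60s for the CRI socket to appear, as the log does.
	deadline := time.Now().Add(60 * time.Second)
	for {
		if _, err := os.Stat(sock); err == nil {
			break
		}
		if time.Now().After(deadline) {
			fmt.Fprintln(os.Stderr, "timed out waiting for", sock)
			os.Exit(1)
		}
		time.Sleep(500 * time.Millisecond)
	}

	// Then confirm the runtime answers, e.g. via `crictl version`.
	out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "crictl version failed: %v\n%s", err, out)
		os.Exit(1)
	}
	fmt.Printf("%s", out)
}
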
	I0819 12:27:38.351552  159489 addons.go:510] duration metric: took 1.057898009s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0819 12:27:38.554697  159489 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-787042" context rescaled to 1 replicas
	I0819 12:27:40.053673  159489 node_ready.go:53] node "flannel-787042" has status "Ready":"False"
	I0819 12:27:39.681891  161013 main.go:141] libmachine: (bridge-787042) Calling .GetIP
	I0819 12:27:39.685049  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:39.685493  161013 main.go:141] libmachine: (bridge-787042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:36:68", ip: ""} in network mk-bridge-787042: {Iface:virbr2 ExpiryTime:2024-08-19 13:27:30 +0000 UTC Type:0 Mac:52:54:00:88:36:68 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:bridge-787042 Clientid:01:52:54:00:88:36:68}
	I0819 12:27:39.685527  161013 main.go:141] libmachine: (bridge-787042) DBG | domain bridge-787042 has defined IP address 192.168.39.222 and MAC address 52:54:00:88:36:68 in network mk-bridge-787042
	I0819 12:27:39.685793  161013 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 12:27:39.690053  161013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 12:27:39.705478  161013 kubeadm.go:883] updating cluster {Name:bridge-787042 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:bridge-787042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 12:27:39.705647  161013 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:27:39.705704  161013 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:27:39.742434  161013 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 12:27:39.742500  161013 ssh_runner.go:195] Run: which lz4
	I0819 12:27:39.746720  161013 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 12:27:39.751954  161013 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 12:27:39.752010  161013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 12:27:41.053172  161013 crio.go:462] duration metric: took 1.30648247s to copy over tarball
	I0819 12:27:41.053278  161013 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 12:27:38.781207  161206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 12:27:38.809565  161206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0819 12:27:38.838225  161206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 12:27:38.862826  161206 provision.go:87] duration metric: took 254.210112ms to configureAuth
	I0819 12:27:38.862863  161206 buildroot.go:189] setting minikube options for container-runtime
	I0819 12:27:38.863085  161206 config.go:182] Loaded profile config "kubernetes-upgrade-814177": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:27:38.863195  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHHostname
	I0819 12:27:38.865922  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:38.866336  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:26:39 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:27:38.866370  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:38.866574  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHPort
	I0819 12:27:38.866816  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:27:38.866963  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:27:38.867112  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHUsername
	I0819 12:27:38.867275  161206 main.go:141] libmachine: Using SSH client type: native
	I0819 12:27:38.867460  161206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0819 12:27:38.867483  161206 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 12:27:42.554534  159489 node_ready.go:53] node "flannel-787042" has status "Ready":"False"
	I0819 12:27:45.053669  159489 node_ready.go:53] node "flannel-787042" has status "Ready":"False"
	I0819 12:27:47.268571  162691 start.go:364] duration metric: took 19.719422647s to acquireMachinesLock for "old-k8s-version-668313"
	I0819 12:27:47.268649  162691 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-668313 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-668313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 12:27:47.268759  162691 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 12:27:47.270451  162691 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 12:27:47.270654  162691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:27:47.270706  162691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:27:47.288381  162691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36739
	I0819 12:27:47.288834  162691 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:27:47.289487  162691 main.go:141] libmachine: Using API Version  1
	I0819 12:27:47.289511  162691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:27:47.289848  162691 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:27:47.290036  162691 main.go:141] libmachine: (old-k8s-version-668313) Calling .GetMachineName
	I0819 12:27:47.290211  162691 main.go:141] libmachine: (old-k8s-version-668313) Calling .DriverName
	I0819 12:27:47.290347  162691 start.go:159] libmachine.API.Create for "old-k8s-version-668313" (driver="kvm2")
	I0819 12:27:47.290381  162691 client.go:168] LocalClient.Create starting
	I0819 12:27:47.290418  162691 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem
	I0819 12:27:47.290456  162691 main.go:141] libmachine: Decoding PEM data...
	I0819 12:27:47.290474  162691 main.go:141] libmachine: Parsing certificate...
	I0819 12:27:47.290552  162691 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem
	I0819 12:27:47.290582  162691 main.go:141] libmachine: Decoding PEM data...
	I0819 12:27:47.290602  162691 main.go:141] libmachine: Parsing certificate...
	I0819 12:27:47.290631  162691 main.go:141] libmachine: Running pre-create checks...
	I0819 12:27:47.290641  162691 main.go:141] libmachine: (old-k8s-version-668313) Calling .PreCreateCheck
	I0819 12:27:47.291055  162691 main.go:141] libmachine: (old-k8s-version-668313) Calling .GetConfigRaw
	I0819 12:27:47.291428  162691 main.go:141] libmachine: Creating machine...
	I0819 12:27:47.291444  162691 main.go:141] libmachine: (old-k8s-version-668313) Calling .Create
	I0819 12:27:47.291583  162691 main.go:141] libmachine: (old-k8s-version-668313) Creating KVM machine...
	I0819 12:27:47.292940  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | found existing default KVM network
	I0819 12:27:47.294307  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | I0819 12:27:47.294160  162884 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:0e:09:65} reservation:<nil>}
	I0819 12:27:47.295319  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | I0819 12:27:47.295223  162884 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f0:ea:96} reservation:<nil>}
	I0819 12:27:47.296288  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | I0819 12:27:47.296205  162884 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:a1:7b:05} reservation:<nil>}
	I0819 12:27:47.297497  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | I0819 12:27:47.297412  162884 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000323dd0}
	I0819 12:27:47.297521  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | created network xml: 
	I0819 12:27:47.297533  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | <network>
	I0819 12:27:47.297542  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG |   <name>mk-old-k8s-version-668313</name>
	I0819 12:27:47.297553  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG |   <dns enable='no'/>
	I0819 12:27:47.297564  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG |   
	I0819 12:27:47.297575  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0819 12:27:47.297595  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG |     <dhcp>
	I0819 12:27:47.297677  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0819 12:27:47.297709  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG |     </dhcp>
	I0819 12:27:47.297720  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG |   </ip>
	I0819 12:27:47.297727  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG |   
	I0819 12:27:47.297734  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | </network>
	I0819 12:27:47.297744  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | 
	I0819 12:27:47.303408  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | trying to create private KVM network mk-old-k8s-version-668313 192.168.72.0/24...
	I0819 12:27:47.382087  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | private KVM network mk-old-k8s-version-668313 192.168.72.0/24 created
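	The XML block above is the private libvirt network definition the kvm2 driver feeds to libvirt before the VM is created. As a rough, hypothetical illustration of the same operation outside minikube, such a network could be defined and started by shelling out to virsh (assuming virsh is installed, the XML is saved to a local file, and the caller may use qemu:///system; this is not the driver's actual code path, which talks to the libvirt API directly):

	// Sketch only: define and start a libvirt network from an XML file by
	// shelling out to virsh. Paths and the network name mirror the log above
	// but are illustrative.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func defineAndStartNetwork(xmlPath, name string) error {
		// "virsh net-define" registers the network persistently from the XML file.
		if out, err := exec.Command("virsh", "--connect", "qemu:///system", "net-define", xmlPath).CombinedOutput(); err != nil {
			return fmt.Errorf("net-define failed: %v: %s", err, out)
		}
		// "virsh net-start" activates it (creates the bridge and its dnsmasq instance).
		if out, err := exec.Command("virsh", "--connect", "qemu:///system", "net-start", name).CombinedOutput(); err != nil {
			return fmt.Errorf("net-start failed: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := defineAndStartNetwork("mk-old-k8s-version-668313.xml", "mk-old-k8s-version-668313"); err != nil {
			fmt.Println(err)
		}
	}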
	I0819 12:27:47.382130  162691 main.go:141] libmachine: (old-k8s-version-668313) Setting up store path in /home/jenkins/minikube-integration/19476-99410/.minikube/machines/old-k8s-version-668313 ...
	I0819 12:27:47.382158  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | I0819 12:27:47.382049  162884 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 12:27:47.382172  162691 main.go:141] libmachine: (old-k8s-version-668313) Building disk image from file:///home/jenkins/minikube-integration/19476-99410/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 12:27:47.382280  162691 main.go:141] libmachine: (old-k8s-version-668313) Downloading /home/jenkins/minikube-integration/19476-99410/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19476-99410/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 12:27:43.401583  161013 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.348263639s)
	I0819 12:27:43.401618  161013 crio.go:469] duration metric: took 2.348408779s to extract the tarball
	I0819 12:27:43.401629  161013 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 12:27:43.440508  161013 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:27:43.481339  161013 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 12:27:43.481373  161013 cache_images.go:84] Images are preloaded, skipping loading
	I0819 12:27:43.481383  161013 kubeadm.go:934] updating node { 192.168.39.222 8443 v1.31.0 crio true true} ...
	I0819 12:27:43.481531  161013 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-787042 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:bridge-787042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0819 12:27:43.481636  161013 ssh_runner.go:195] Run: crio config
	I0819 12:27:43.543201  161013 cni.go:84] Creating CNI manager for "bridge"
	I0819 12:27:43.543224  161013 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 12:27:43.543246  161013 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.222 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-787042 NodeName:bridge-787042 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 12:27:43.543372  161013 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.222
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-787042"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.222
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 12:27:43.543430  161013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 12:27:43.556157  161013 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 12:27:43.556236  161013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 12:27:43.567113  161013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0819 12:27:43.588478  161013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 12:27:43.608223  161013 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
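	The kubeadm/kubelet configuration printed above is rendered from the cluster parameters and then copied to the node as kubeadm.yaml. A minimal sketch of that rendering technique, using only text/template with illustrative field names (not minikube's real template or types):

	// Sketch: render a KubeletConfiguration fragment from cluster parameters
	// with text/template, the general technique behind the config block above.
	package main

	import (
		"os"
		"text/template"
	)

	type kubeletParams struct {
		CgroupDriver  string
		CRISocket     string
		ClusterDomain string
	}

	const kubeletTmpl = `apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: {{.CgroupDriver}}
	containerRuntimeEndpoint: {{.CRISocket}}
	clusterDomain: "{{.ClusterDomain}}"
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
		_ = t.Execute(os.Stdout, kubeletParams{
			CgroupDriver:  "cgroupfs",
			CRISocket:     "unix:///var/run/crio/crio.sock",
			ClusterDomain: "cluster.local",
		})
	}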
	I0819 12:27:43.628509  161013 ssh_runner.go:195] Run: grep 192.168.39.222	control-plane.minikube.internal$ /etc/hosts
	I0819 12:27:43.633304  161013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 12:27:43.647584  161013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:27:43.761438  161013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:27:43.781373  161013 certs.go:68] Setting up /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042 for IP: 192.168.39.222
	I0819 12:27:43.781403  161013 certs.go:194] generating shared ca certs ...
	I0819 12:27:43.781420  161013 certs.go:226] acquiring lock for ca certs: {Name:mkea0a5571fe2f86ea35780a8a0585cdcb5f186c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:27:43.781613  161013 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key
	I0819 12:27:43.781673  161013 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key
	I0819 12:27:43.781685  161013 certs.go:256] generating profile certs ...
	I0819 12:27:43.781759  161013 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/client.key
	I0819 12:27:43.781776  161013 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/client.crt with IP's: []
	I0819 12:27:43.842019  161013 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/client.crt ...
	I0819 12:27:43.842047  161013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/client.crt: {Name:mkc8e02c2dc9f591a6723c6f6ca166e0175d5987 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:27:43.842236  161013 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/client.key ...
	I0819 12:27:43.842249  161013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/client.key: {Name:mk9adb9ca5117c20bb11490be40c779a5630720f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:27:43.842351  161013 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/apiserver.key.d99dbdde
	I0819 12:27:43.842366  161013 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/apiserver.crt.d99dbdde with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.222]
	I0819 12:27:44.293957  161013 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/apiserver.crt.d99dbdde ...
	I0819 12:27:44.293998  161013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/apiserver.crt.d99dbdde: {Name:mk3870c6b8012d951b32514499c314c972c18de3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:27:44.294230  161013 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/apiserver.key.d99dbdde ...
	I0819 12:27:44.294252  161013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/apiserver.key.d99dbdde: {Name:mk3beadbbb5a62d1dc3ae2e1df8bad4e666d6964 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:27:44.294392  161013 certs.go:381] copying /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/apiserver.crt.d99dbdde -> /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/apiserver.crt
	I0819 12:27:44.294511  161013 certs.go:385] copying /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/apiserver.key.d99dbdde -> /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/apiserver.key
	I0819 12:27:44.294599  161013 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/proxy-client.key
	I0819 12:27:44.294622  161013 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/proxy-client.crt with IP's: []
	I0819 12:27:44.452825  161013 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/proxy-client.crt ...
	I0819 12:27:44.452855  161013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/proxy-client.crt: {Name:mk3e9ffd69762876ef99ba8dab44b7e495c37f4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:27:44.453022  161013 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/proxy-client.key ...
	I0819 12:27:44.453034  161013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/proxy-client.key: {Name:mkaa01455d67358286cf3c22085a954697182243 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
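	The profile certificates generated above are CA-signed certs that carry the cluster's IP SANs (10.96.0.1, 127.0.0.1 and the node IP). A self-contained sketch of that technique with the standard crypto/x509 package, using a throwaway CA and illustrative names rather than minikube's own certs helpers:

	// Sketch: sign a serving certificate with IP SANs using a freshly created CA.
	// Error handling is elided for brevity; all names and lifetimes are illustrative.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// IP SANs matching the apiserver cert generated in the log above.
			IPAddresses: []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.222")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}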
	I0819 12:27:44.453193  161013 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem (1338 bytes)
	W0819 12:27:44.453229  161013 certs.go:480] ignoring /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632_empty.pem, impossibly tiny 0 bytes
	I0819 12:27:44.453238  161013 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 12:27:44.453261  161013 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem (1082 bytes)
	I0819 12:27:44.453283  161013 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem (1123 bytes)
	I0819 12:27:44.453305  161013 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem (1679 bytes)
	I0819 12:27:44.453343  161013 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 12:27:44.453923  161013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 12:27:44.493586  161013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 12:27:44.533424  161013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 12:27:44.560409  161013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 12:27:44.585919  161013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 12:27:44.612585  161013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 12:27:44.639802  161013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 12:27:44.683872  161013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 12:27:44.710523  161013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 12:27:44.734461  161013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem --> /usr/share/ca-certificates/106632.pem (1338 bytes)
	I0819 12:27:44.758844  161013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /usr/share/ca-certificates/1066322.pem (1708 bytes)
	I0819 12:27:44.782799  161013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 12:27:44.799253  161013 ssh_runner.go:195] Run: openssl version
	I0819 12:27:44.805266  161013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 12:27:44.816082  161013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:27:44.820339  161013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:27:44.820395  161013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:27:44.825934  161013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 12:27:44.837257  161013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106632.pem && ln -fs /usr/share/ca-certificates/106632.pem /etc/ssl/certs/106632.pem"
	I0819 12:27:44.848412  161013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106632.pem
	I0819 12:27:44.853340  161013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 11:26 /usr/share/ca-certificates/106632.pem
	I0819 12:27:44.853408  161013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106632.pem
	I0819 12:27:44.859536  161013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106632.pem /etc/ssl/certs/51391683.0"
	I0819 12:27:44.870713  161013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1066322.pem && ln -fs /usr/share/ca-certificates/1066322.pem /etc/ssl/certs/1066322.pem"
	I0819 12:27:44.882891  161013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1066322.pem
	I0819 12:27:44.887199  161013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 11:26 /usr/share/ca-certificates/1066322.pem
	I0819 12:27:44.887272  161013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1066322.pem
	I0819 12:27:44.893164  161013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1066322.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 12:27:44.903517  161013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:27:44.907693  161013 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 12:27:44.907773  161013 kubeadm.go:392] StartCluster: {Name:bridge-787042 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-787042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:27:44.907869  161013 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 12:27:44.907931  161013 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 12:27:44.944098  161013 cri.go:89] found id: ""
	I0819 12:27:44.944188  161013 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 12:27:44.954341  161013 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 12:27:44.963583  161013 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 12:27:44.973330  161013 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 12:27:44.973351  161013 kubeadm.go:157] found existing configuration files:
	
	I0819 12:27:44.973406  161013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 12:27:44.982374  161013 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 12:27:44.982452  161013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 12:27:44.992554  161013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 12:27:45.001364  161013 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 12:27:45.001433  161013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 12:27:45.011014  161013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 12:27:45.022096  161013 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 12:27:45.022153  161013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 12:27:45.031495  161013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 12:27:45.040262  161013 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 12:27:45.040332  161013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 12:27:45.049143  161013 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 12:27:45.099365  161013 kubeadm.go:310] W0819 12:27:45.079004     855 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 12:27:45.100182  161013 kubeadm.go:310] W0819 12:27:45.079920     855 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 12:27:45.201532  161013 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 12:27:47.010870  161206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 12:27:47.010902  161206 machine.go:96] duration metric: took 8.768327439s to provisionDockerMachine
	I0819 12:27:47.010922  161206 start.go:293] postStartSetup for "kubernetes-upgrade-814177" (driver="kvm2")
	I0819 12:27:47.010937  161206 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 12:27:47.010963  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .DriverName
	I0819 12:27:47.011375  161206 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 12:27:47.011411  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHHostname
	I0819 12:27:47.015013  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:47.015435  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:26:39 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:27:47.015468  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:47.015889  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHPort
	I0819 12:27:47.016120  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:27:47.016319  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHUsername
	I0819 12:27:47.016497  161206 sshutil.go:53] new ssh client: &{IP:192.168.50.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/kubernetes-upgrade-814177/id_rsa Username:docker}
	I0819 12:27:47.103010  161206 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 12:27:47.107250  161206 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 12:27:47.107287  161206 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/addons for local assets ...
	I0819 12:27:47.107369  161206 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/files for local assets ...
	I0819 12:27:47.107478  161206 filesync.go:149] local asset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> 1066322.pem in /etc/ssl/certs
	I0819 12:27:47.107583  161206 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 12:27:47.117262  161206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 12:27:47.149770  161206 start.go:296] duration metric: took 138.82881ms for postStartSetup
	I0819 12:27:47.149824  161206 fix.go:56] duration metric: took 8.933423708s for fixHost
	I0819 12:27:47.149852  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHHostname
	I0819 12:27:47.152793  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:47.153161  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:26:39 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:27:47.153196  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:47.153374  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHPort
	I0819 12:27:47.153636  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:27:47.153822  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:27:47.153989  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHUsername
	I0819 12:27:47.154200  161206 main.go:141] libmachine: Using SSH client type: native
	I0819 12:27:47.154445  161206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0819 12:27:47.154464  161206 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 12:27:47.268398  161206 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724070467.261407544
	
	I0819 12:27:47.268428  161206 fix.go:216] guest clock: 1724070467.261407544
	I0819 12:27:47.268437  161206 fix.go:229] Guest: 2024-08-19 12:27:47.261407544 +0000 UTC Remote: 2024-08-19 12:27:47.149830134 +0000 UTC m=+43.427038908 (delta=111.57741ms)
	I0819 12:27:47.268463  161206 fix.go:200] guest clock delta is within tolerance: 111.57741ms
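	The fix.go lines above read the guest clock over SSH, compare it with the host clock, and accept the drift when it is within a tolerance. A small sketch of that check, with illustrative values taken from the log:

	// Sketch: compute the absolute guest/host clock delta and test it against a
	// tolerance. The 2s tolerance is an assumption, not minikube's constant.
	package main

	import (
		"fmt"
		"time"
	)

	func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(111 * time.Millisecond) // roughly the delta seen above
		d, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
		fmt.Printf("delta=%v withinTolerance=%v\n", d, ok)
	}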
	I0819 12:27:47.268471  161206 start.go:83] releasing machines lock for "kubernetes-upgrade-814177", held for 9.052111674s
	I0819 12:27:47.268503  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .DriverName
	I0819 12:27:47.268800  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetIP
	I0819 12:27:47.271817  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:47.272196  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:26:39 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:27:47.272227  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:47.272405  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .DriverName
	I0819 12:27:47.272981  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .DriverName
	I0819 12:27:47.273153  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .DriverName
	I0819 12:27:47.273260  161206 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 12:27:47.273311  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHHostname
	I0819 12:27:47.273333  161206 ssh_runner.go:195] Run: cat /version.json
	I0819 12:27:47.273353  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHHostname
	I0819 12:27:47.276100  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:47.276368  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:47.276529  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:26:39 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:27:47.276598  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:47.276730  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHPort
	I0819 12:27:47.276750  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:26:39 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:27:47.276791  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:47.276960  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:27:47.277023  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHPort
	I0819 12:27:47.277144  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHUsername
	I0819 12:27:47.277223  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHKeyPath
	I0819 12:27:47.277303  161206 sshutil.go:53] new ssh client: &{IP:192.168.50.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/kubernetes-upgrade-814177/id_rsa Username:docker}
	I0819 12:27:47.277703  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetSSHUsername
	I0819 12:27:47.277846  161206 sshutil.go:53] new ssh client: &{IP:192.168.50.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/kubernetes-upgrade-814177/id_rsa Username:docker}
	I0819 12:27:47.386544  161206 ssh_runner.go:195] Run: systemctl --version
	I0819 12:27:47.394722  161206 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 12:27:47.556414  161206 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 12:27:47.566576  161206 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 12:27:47.566657  161206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 12:27:47.577957  161206 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 12:27:47.577985  161206 start.go:495] detecting cgroup driver to use...
	I0819 12:27:47.578060  161206 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 12:27:47.595174  161206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 12:27:47.614807  161206 docker.go:217] disabling cri-docker service (if available) ...
	I0819 12:27:47.614897  161206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 12:27:47.635125  161206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 12:27:47.652059  161206 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 12:27:47.815908  161206 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 12:27:48.030835  161206 docker.go:233] disabling docker service ...
	I0819 12:27:48.030901  161206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 12:27:48.131378  161206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 12:27:48.242696  161206 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 12:27:48.489735  161206 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 12:27:48.896557  161206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 12:27:48.995419  161206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 12:27:49.190454  161206 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 12:27:49.190521  161206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:27:49.291812  161206 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 12:27:49.291894  161206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:27:49.419819  161206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:27:49.490619  161206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:27:49.518952  161206 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 12:27:49.540090  161206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:27:49.573312  161206 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:27:49.594638  161206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
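	The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image and the cgroup driver. One way to express the same two edits in Go (a sketch only; as the log shows, minikube actually shells out to sed on the guest):

	// Sketch: rewrite the pause_image and cgroup_manager keys in the CRI-O
	// drop-in config. The path and values mirror the log above.
	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			panic(err)
		}
	}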
	I0819 12:27:49.615140  161206 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 12:27:49.631819  161206 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 12:27:49.648747  161206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:27:49.875370  161206 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 12:27:50.894492  161206 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.019080405s)
	I0819 12:27:50.894537  161206 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 12:27:50.894597  161206 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 12:27:50.900789  161206 start.go:563] Will wait 60s for crictl version
	I0819 12:27:50.900860  161206 ssh_runner.go:195] Run: which crictl
	I0819 12:27:50.905808  161206 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 12:27:50.942845  161206 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 12:27:50.942964  161206 ssh_runner.go:195] Run: crio --version
	I0819 12:27:50.978782  161206 ssh_runner.go:195] Run: crio --version
	I0819 12:27:51.013311  161206 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 12:27:47.054604  159489 node_ready.go:53] node "flannel-787042" has status "Ready":"False"
	I0819 12:27:47.559623  159489 node_ready.go:49] node "flannel-787042" has status "Ready":"True"
	I0819 12:27:47.559652  159489 node_ready.go:38] duration metric: took 9.509623189s for node "flannel-787042" to be "Ready" ...
	I0819 12:27:47.559664  159489 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 12:27:47.569396  159489 pod_ready.go:79] waiting up to 15m0s for pod "coredns-6f6b679f8f-jsq4k" in "kube-system" namespace to be "Ready" ...
	I0819 12:27:49.578220  159489 pod_ready.go:103] pod "coredns-6f6b679f8f-jsq4k" in "kube-system" namespace has status "Ready":"False"
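	The node_ready/pod_ready waits above poll the API server until the node and the system-critical pods report Ready. A hedged sketch of the node half of that wait using client-go directly (kubeconfig path, poll interval and timeout are illustrative, not minikube's own helpers):

	// Sketch: poll a node's conditions until NodeReady is True or a timeout expires.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("node %q not Ready after %v", name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForNodeReady(cs, "flannel-787042", 15*time.Minute); err != nil {
			fmt.Println(err)
		}
	}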
	I0819 12:27:47.653680  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | I0819 12:27:47.653571  162884 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/old-k8s-version-668313/id_rsa...
	I0819 12:27:47.814310  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | I0819 12:27:47.814164  162884 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/old-k8s-version-668313/old-k8s-version-668313.rawdisk...
	I0819 12:27:47.814345  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | Writing magic tar header
	I0819 12:27:47.814363  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | Writing SSH key tar header
	I0819 12:27:47.814375  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | I0819 12:27:47.814336  162884 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19476-99410/.minikube/machines/old-k8s-version-668313 ...
	I0819 12:27:47.814535  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/old-k8s-version-668313
	I0819 12:27:47.814563  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube/machines
	I0819 12:27:47.814576  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 12:27:47.814587  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410
	I0819 12:27:47.814606  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 12:27:47.814620  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | Checking permissions on dir: /home/jenkins
	I0819 12:27:47.814632  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | Checking permissions on dir: /home
	I0819 12:27:47.814645  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | Skipping /home - not owner
	I0819 12:27:47.814664  162691 main.go:141] libmachine: (old-k8s-version-668313) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube/machines/old-k8s-version-668313 (perms=drwx------)
	I0819 12:27:47.814698  162691 main.go:141] libmachine: (old-k8s-version-668313) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube/machines (perms=drwxr-xr-x)
	I0819 12:27:47.814712  162691 main.go:141] libmachine: (old-k8s-version-668313) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube (perms=drwxr-xr-x)
	I0819 12:27:47.814726  162691 main.go:141] libmachine: (old-k8s-version-668313) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410 (perms=drwxrwxr-x)
	I0819 12:27:47.814742  162691 main.go:141] libmachine: (old-k8s-version-668313) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 12:27:47.814750  162691 main.go:141] libmachine: (old-k8s-version-668313) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 12:27:47.814764  162691 main.go:141] libmachine: (old-k8s-version-668313) Creating domain...
	I0819 12:27:47.816117  162691 main.go:141] libmachine: (old-k8s-version-668313) define libvirt domain using xml: 
	I0819 12:27:47.816138  162691 main.go:141] libmachine: (old-k8s-version-668313) <domain type='kvm'>
	I0819 12:27:47.816155  162691 main.go:141] libmachine: (old-k8s-version-668313)   <name>old-k8s-version-668313</name>
	I0819 12:27:47.816167  162691 main.go:141] libmachine: (old-k8s-version-668313)   <memory unit='MiB'>2200</memory>
	I0819 12:27:47.816176  162691 main.go:141] libmachine: (old-k8s-version-668313)   <vcpu>2</vcpu>
	I0819 12:27:47.816189  162691 main.go:141] libmachine: (old-k8s-version-668313)   <features>
	I0819 12:27:47.816203  162691 main.go:141] libmachine: (old-k8s-version-668313)     <acpi/>
	I0819 12:27:47.816213  162691 main.go:141] libmachine: (old-k8s-version-668313)     <apic/>
	I0819 12:27:47.816222  162691 main.go:141] libmachine: (old-k8s-version-668313)     <pae/>
	I0819 12:27:47.816232  162691 main.go:141] libmachine: (old-k8s-version-668313)     
	I0819 12:27:47.816241  162691 main.go:141] libmachine: (old-k8s-version-668313)   </features>
	I0819 12:27:47.816250  162691 main.go:141] libmachine: (old-k8s-version-668313)   <cpu mode='host-passthrough'>
	I0819 12:27:47.816262  162691 main.go:141] libmachine: (old-k8s-version-668313)   
	I0819 12:27:47.816268  162691 main.go:141] libmachine: (old-k8s-version-668313)   </cpu>
	I0819 12:27:47.816276  162691 main.go:141] libmachine: (old-k8s-version-668313)   <os>
	I0819 12:27:47.816283  162691 main.go:141] libmachine: (old-k8s-version-668313)     <type>hvm</type>
	I0819 12:27:47.816291  162691 main.go:141] libmachine: (old-k8s-version-668313)     <boot dev='cdrom'/>
	I0819 12:27:47.816299  162691 main.go:141] libmachine: (old-k8s-version-668313)     <boot dev='hd'/>
	I0819 12:27:47.816307  162691 main.go:141] libmachine: (old-k8s-version-668313)     <bootmenu enable='no'/>
	I0819 12:27:47.816315  162691 main.go:141] libmachine: (old-k8s-version-668313)   </os>
	I0819 12:27:47.816322  162691 main.go:141] libmachine: (old-k8s-version-668313)   <devices>
	I0819 12:27:47.816331  162691 main.go:141] libmachine: (old-k8s-version-668313)     <disk type='file' device='cdrom'>
	I0819 12:27:47.816345  162691 main.go:141] libmachine: (old-k8s-version-668313)       <source file='/home/jenkins/minikube-integration/19476-99410/.minikube/machines/old-k8s-version-668313/boot2docker.iso'/>
	I0819 12:27:47.816352  162691 main.go:141] libmachine: (old-k8s-version-668313)       <target dev='hdc' bus='scsi'/>
	I0819 12:27:47.816360  162691 main.go:141] libmachine: (old-k8s-version-668313)       <readonly/>
	I0819 12:27:47.816367  162691 main.go:141] libmachine: (old-k8s-version-668313)     </disk>
	I0819 12:27:47.816377  162691 main.go:141] libmachine: (old-k8s-version-668313)     <disk type='file' device='disk'>
	I0819 12:27:47.816387  162691 main.go:141] libmachine: (old-k8s-version-668313)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 12:27:47.816401  162691 main.go:141] libmachine: (old-k8s-version-668313)       <source file='/home/jenkins/minikube-integration/19476-99410/.minikube/machines/old-k8s-version-668313/old-k8s-version-668313.rawdisk'/>
	I0819 12:27:47.816410  162691 main.go:141] libmachine: (old-k8s-version-668313)       <target dev='hda' bus='virtio'/>
	I0819 12:27:47.816418  162691 main.go:141] libmachine: (old-k8s-version-668313)     </disk>
	I0819 12:27:47.816426  162691 main.go:141] libmachine: (old-k8s-version-668313)     <interface type='network'>
	I0819 12:27:47.816436  162691 main.go:141] libmachine: (old-k8s-version-668313)       <source network='mk-old-k8s-version-668313'/>
	I0819 12:27:47.816444  162691 main.go:141] libmachine: (old-k8s-version-668313)       <model type='virtio'/>
	I0819 12:27:47.816452  162691 main.go:141] libmachine: (old-k8s-version-668313)     </interface>
	I0819 12:27:47.816460  162691 main.go:141] libmachine: (old-k8s-version-668313)     <interface type='network'>
	I0819 12:27:47.816468  162691 main.go:141] libmachine: (old-k8s-version-668313)       <source network='default'/>
	I0819 12:27:47.816475  162691 main.go:141] libmachine: (old-k8s-version-668313)       <model type='virtio'/>
	I0819 12:27:47.816483  162691 main.go:141] libmachine: (old-k8s-version-668313)     </interface>
	I0819 12:27:47.816490  162691 main.go:141] libmachine: (old-k8s-version-668313)     <serial type='pty'>
	I0819 12:27:47.816499  162691 main.go:141] libmachine: (old-k8s-version-668313)       <target port='0'/>
	I0819 12:27:47.816508  162691 main.go:141] libmachine: (old-k8s-version-668313)     </serial>
	I0819 12:27:47.816516  162691 main.go:141] libmachine: (old-k8s-version-668313)     <console type='pty'>
	I0819 12:27:47.816524  162691 main.go:141] libmachine: (old-k8s-version-668313)       <target type='serial' port='0'/>
	I0819 12:27:47.816533  162691 main.go:141] libmachine: (old-k8s-version-668313)     </console>
	I0819 12:27:47.816540  162691 main.go:141] libmachine: (old-k8s-version-668313)     <rng model='virtio'>
	I0819 12:27:47.816550  162691 main.go:141] libmachine: (old-k8s-version-668313)       <backend model='random'>/dev/random</backend>
	I0819 12:27:47.816557  162691 main.go:141] libmachine: (old-k8s-version-668313)     </rng>
	I0819 12:27:47.816568  162691 main.go:141] libmachine: (old-k8s-version-668313)     
	I0819 12:27:47.816575  162691 main.go:141] libmachine: (old-k8s-version-668313)     
	I0819 12:27:47.816583  162691 main.go:141] libmachine: (old-k8s-version-668313)   </devices>
	I0819 12:27:47.816591  162691 main.go:141] libmachine: (old-k8s-version-668313) </domain>
	I0819 12:27:47.816601  162691 main.go:141] libmachine: (old-k8s-version-668313) 
	I0819 12:27:47.821027  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | domain old-k8s-version-668313 has defined MAC address 52:54:00:27:ef:cd in network default
	I0819 12:27:47.821627  162691 main.go:141] libmachine: (old-k8s-version-668313) Ensuring networks are active...
	I0819 12:27:47.821653  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | domain old-k8s-version-668313 has defined MAC address 52:54:00:23:d9:d3 in network mk-old-k8s-version-668313
	I0819 12:27:47.822558  162691 main.go:141] libmachine: (old-k8s-version-668313) Ensuring network default is active
	I0819 12:27:47.822997  162691 main.go:141] libmachine: (old-k8s-version-668313) Ensuring network mk-old-k8s-version-668313 is active
	I0819 12:27:47.823918  162691 main.go:141] libmachine: (old-k8s-version-668313) Getting domain xml...
	I0819 12:27:47.824913  162691 main.go:141] libmachine: (old-k8s-version-668313) Creating domain...
	I0819 12:27:49.321196  162691 main.go:141] libmachine: (old-k8s-version-668313) Waiting to get IP...
	I0819 12:27:49.322280  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | domain old-k8s-version-668313 has defined MAC address 52:54:00:23:d9:d3 in network mk-old-k8s-version-668313
	I0819 12:27:49.322810  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | unable to find current IP address of domain old-k8s-version-668313 in network mk-old-k8s-version-668313
	I0819 12:27:49.322836  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | I0819 12:27:49.322782  162884 retry.go:31] will retry after 205.190831ms: waiting for machine to come up
	I0819 12:27:49.529420  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | domain old-k8s-version-668313 has defined MAC address 52:54:00:23:d9:d3 in network mk-old-k8s-version-668313
	I0819 12:27:49.530220  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | unable to find current IP address of domain old-k8s-version-668313 in network mk-old-k8s-version-668313
	I0819 12:27:49.530243  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | I0819 12:27:49.530174  162884 retry.go:31] will retry after 238.50099ms: waiting for machine to come up
	I0819 12:27:49.770957  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | domain old-k8s-version-668313 has defined MAC address 52:54:00:23:d9:d3 in network mk-old-k8s-version-668313
	I0819 12:27:49.771549  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | unable to find current IP address of domain old-k8s-version-668313 in network mk-old-k8s-version-668313
	I0819 12:27:49.771700  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | I0819 12:27:49.771613  162884 retry.go:31] will retry after 460.321436ms: waiting for machine to come up
	I0819 12:27:50.233353  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | domain old-k8s-version-668313 has defined MAC address 52:54:00:23:d9:d3 in network mk-old-k8s-version-668313
	I0819 12:27:50.233898  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | unable to find current IP address of domain old-k8s-version-668313 in network mk-old-k8s-version-668313
	I0819 12:27:50.233920  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | I0819 12:27:50.233836  162884 retry.go:31] will retry after 410.419945ms: waiting for machine to come up
	I0819 12:27:50.646480  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | domain old-k8s-version-668313 has defined MAC address 52:54:00:23:d9:d3 in network mk-old-k8s-version-668313
	I0819 12:27:50.647090  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | unable to find current IP address of domain old-k8s-version-668313 in network mk-old-k8s-version-668313
	I0819 12:27:50.647123  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | I0819 12:27:50.647054  162884 retry.go:31] will retry after 463.548599ms: waiting for machine to come up
	I0819 12:27:51.112701  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | domain old-k8s-version-668313 has defined MAC address 52:54:00:23:d9:d3 in network mk-old-k8s-version-668313
	I0819 12:27:51.113244  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | unable to find current IP address of domain old-k8s-version-668313 in network mk-old-k8s-version-668313
	I0819 12:27:51.113270  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | I0819 12:27:51.113195  162884 retry.go:31] will retry after 647.006665ms: waiting for machine to come up
	I0819 12:27:51.762145  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | domain old-k8s-version-668313 has defined MAC address 52:54:00:23:d9:d3 in network mk-old-k8s-version-668313
	I0819 12:27:51.762696  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | unable to find current IP address of domain old-k8s-version-668313 in network mk-old-k8s-version-668313
	I0819 12:27:51.762726  162691 main.go:141] libmachine: (old-k8s-version-668313) DBG | I0819 12:27:51.762656  162884 retry.go:31] will retry after 1.050701155s: waiting for machine to come up
	I0819 12:27:51.014683  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) Calling .GetIP
	I0819 12:27:51.017839  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:51.018256  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:b8:db", ip: ""} in network mk-kubernetes-upgrade-814177: {Iface:virbr3 ExpiryTime:2024-08-19 13:26:39 +0000 UTC Type:0 Mac:52:54:00:0c:b8:db Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-814177 Clientid:01:52:54:00:0c:b8:db}
	I0819 12:27:51.018288  161206 main.go:141] libmachine: (kubernetes-upgrade-814177) DBG | domain kubernetes-upgrade-814177 has defined IP address 192.168.50.23 and MAC address 52:54:00:0c:b8:db in network mk-kubernetes-upgrade-814177
	I0819 12:27:51.018548  161206 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 12:27:51.023421  161206 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-814177 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:kubernetes-upgrade-814177 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.23 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 12:27:51.023535  161206 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:27:51.023591  161206 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:27:51.071205  161206 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 12:27:51.071236  161206 crio.go:433] Images already preloaded, skipping extraction
	I0819 12:27:51.071300  161206 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:27:51.109961  161206 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 12:27:51.109991  161206 cache_images.go:84] Images are preloaded, skipping loading
	I0819 12:27:51.110001  161206 kubeadm.go:934] updating node { 192.168.50.23 8443 v1.31.0 crio true true} ...
	I0819 12:27:51.110130  161206 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-814177 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-814177 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 12:27:51.110209  161206 ssh_runner.go:195] Run: crio config
	I0819 12:27:51.166616  161206 cni.go:84] Creating CNI manager for ""
	I0819 12:27:51.166644  161206 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 12:27:51.166656  161206 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 12:27:51.166688  161206 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.23 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-814177 NodeName:kubernetes-upgrade-814177 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 12:27:51.166900  161206 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.23
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-814177"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.23
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.23"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 12:27:51.166989  161206 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 12:27:51.180715  161206 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 12:27:51.180808  161206 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 12:27:51.193947  161206 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0819 12:27:51.214282  161206 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 12:27:51.231889  161206 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0819 12:27:51.252699  161206 ssh_runner.go:195] Run: grep 192.168.50.23	control-plane.minikube.internal$ /etc/hosts
	I0819 12:27:51.257252  161206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:27:51.534311  161206 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:27:51.614714  161206 certs.go:68] Setting up /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177 for IP: 192.168.50.23
	I0819 12:27:51.614740  161206 certs.go:194] generating shared ca certs ...
	I0819 12:27:51.614762  161206 certs.go:226] acquiring lock for ca certs: {Name:mkea0a5571fe2f86ea35780a8a0585cdcb5f186c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:27:51.614944  161206 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key
	I0819 12:27:51.615005  161206 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key
	I0819 12:27:51.615014  161206 certs.go:256] generating profile certs ...
	I0819 12:27:51.615118  161206 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/client.key
	I0819 12:27:51.615174  161206 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/apiserver.key.1b2c2bf2
	I0819 12:27:51.615217  161206 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/proxy-client.key
	I0819 12:27:51.615359  161206 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem (1338 bytes)
	W0819 12:27:51.615392  161206 certs.go:480] ignoring /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632_empty.pem, impossibly tiny 0 bytes
	I0819 12:27:51.615400  161206 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 12:27:51.615432  161206 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem (1082 bytes)
	I0819 12:27:51.615459  161206 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem (1123 bytes)
	I0819 12:27:51.615485  161206 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem (1679 bytes)
	I0819 12:27:51.615540  161206 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 12:27:51.616465  161206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 12:27:51.811672  161206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 12:27:51.889864  161206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 12:27:51.997287  161206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 12:27:52.063691  161206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 12:27:52.108265  161206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 12:27:52.147640  161206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 12:27:52.221359  161206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/kubernetes-upgrade-814177/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 12:27:52.271302  161206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 12:27:52.315356  161206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem --> /usr/share/ca-certificates/106632.pem (1338 bytes)
	I0819 12:27:52.398123  161206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /usr/share/ca-certificates/1066322.pem (1708 bytes)
	I0819 12:27:52.453562  161206 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 12:27:52.498829  161206 ssh_runner.go:195] Run: openssl version
	I0819 12:27:52.507984  161206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 12:27:52.544569  161206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:27:52.549505  161206 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:27:52.549588  161206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:27:52.558533  161206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 12:27:52.570859  161206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106632.pem && ln -fs /usr/share/ca-certificates/106632.pem /etc/ssl/certs/106632.pem"
	I0819 12:27:52.585160  161206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106632.pem
	I0819 12:27:52.589718  161206 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 11:26 /usr/share/ca-certificates/106632.pem
	I0819 12:27:52.589798  161206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106632.pem
	I0819 12:27:52.595431  161206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106632.pem /etc/ssl/certs/51391683.0"
	I0819 12:27:52.606483  161206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1066322.pem && ln -fs /usr/share/ca-certificates/1066322.pem /etc/ssl/certs/1066322.pem"
	I0819 12:27:52.617662  161206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1066322.pem
	I0819 12:27:52.623526  161206 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 11:26 /usr/share/ca-certificates/1066322.pem
	I0819 12:27:52.623599  161206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1066322.pem
	I0819 12:27:52.645692  161206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1066322.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 12:27:52.676325  161206 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:27:52.682652  161206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 12:27:52.688895  161206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 12:27:52.694422  161206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 12:27:52.700224  161206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 12:27:52.706004  161206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 12:27:52.711740  161206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 12:27:52.717230  161206 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-814177 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0 ClusterName:kubernetes-upgrade-814177 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.23 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:27:52.717345  161206 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 12:27:52.717394  161206 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 12:27:52.764696  161206 cri.go:89] found id: "9598b04453d89cd6ce3d4dd6c7629286650a6a345ded2d6d2749b82784a2792d"
	I0819 12:27:52.764725  161206 cri.go:89] found id: "5eb17d444ee72978731602c861495f7a423ded2cf54a025bacce5e0a1bbfe20e"
	I0819 12:27:52.764731  161206 cri.go:89] found id: "1c3fd2a7d5b94175873114e6d65a984db0c9862e59e98ef63616570a777fa2d1"
	I0819 12:27:52.764735  161206 cri.go:89] found id: "0e7c7306d316b5c910ad016b8b46a5f06d031929386b7d42b7a4e7a1826abb73"
	I0819 12:27:52.764740  161206 cri.go:89] found id: "303dc84dee1f747221a49b9301955e70afd9ab3e16ebf43124f540edb3a76f98"
	I0819 12:27:52.764744  161206 cri.go:89] found id: "866f3a5c577ffd28fd6aaff38dc715c118ac4c36d86a8cbf80bde65a679cb1d2"
	I0819 12:27:52.764748  161206 cri.go:89] found id: "45dd0720271fefccbb9aa1801c7467b67db245d398ab2aceb24a845682a9f6ff"
	I0819 12:27:52.764771  161206 cri.go:89] found id: "0da63d78797725d087a2cb1fecf380db6d2848c6fec3f15f6bf4ed1022dabf3b"
	I0819 12:27:52.764776  161206 cri.go:89] found id: "003762aa1568c4b2a107a3bdc1e88aa89ccf80248edb4a9b4243a694fc57cb64"
	I0819 12:27:52.764792  161206 cri.go:89] found id: "836e450593208996ee58cd927ed6ec10c0a2d61cf6b371142bc8c635743c9dba"
	I0819 12:27:52.764800  161206 cri.go:89] found id: ""
	I0819 12:27:52.764854  161206 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 19 12:28:16 kubernetes-upgrade-814177 crio[3096]: time="2024-08-19 12:28:16.014348519Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070496014320828,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9ffe34f2-78ba-4e70-afe7-2cb0c49e4a0a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:28:16 kubernetes-upgrade-814177 crio[3096]: time="2024-08-19 12:28:16.014733611Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa0a55c7-154c-4898-b9d6-3e6f3c0bb693 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:28:16 kubernetes-upgrade-814177 crio[3096]: time="2024-08-19 12:28:16.014859387Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa0a55c7-154c-4898-b9d6-3e6f3c0bb693 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:28:16 kubernetes-upgrade-814177 crio[3096]: time="2024-08-19 12:28:16.015306829Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92742d9f8174689d965ddf312491c08e9026d6aad9247ba62d2c6c7c17e129ff,PodSandboxId:d841a2bb66d62f727f639c91960a1d293d0a6a1b74066354af03664ed5698cd2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070492374022606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hxktk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4757a3-c4d5-4c8e-ac15-7d9f3c8ebdb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8343f3278191a5fe9db0fbb0e27f74b20292cb45126c31c3af8a228c34188622,PodSandboxId:5f9a47248bb2804a1e458f2c91c7c8beabd92f45f1571f24f260dc83aaa52c41,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724070492368133339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: f2d7ac6f-57d8-4dcb-ac0e-47f659c9bd7d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b8defaa8b605f7dcd4e9f3ffe08a4a4ee104879f800e7355bdb73383edae35,PodSandboxId:c5093b23fedf3302304fbd04b9aff0367b8a5e1eefdefc1cdf766d11e6415f6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070492344325095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qk6jw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 87423f1a-80ea-45b8-aeab-44b6b062799c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65bb360cd40c8400166e66232daa865bc11073c78767330ed2f2b07fbe12b73f,PodSandboxId:37403351bb5d6d2e8b88cdcec82568a68f37674afdab4567b9f29b01aadfc0c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNI
NG,CreatedAt:1724070489490321464,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4fb1002898663faa99a13b77c1e7536,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bb02685ac28e6d6d56e359a0023e01c1de7d5441ee2da0de542a75ecd603557,PodSandboxId:1ea7703be9320957879c51d4dc7d776a31b4735a19ed2ac946512a2eac0d0cff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,Cr
eatedAt:1724070489477757960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e8a5dfd1d791aa5334df20dbbc6f92,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba92943fb925e7199d3b162d29c34bba33cd3d44d92685e130aba0e9f16b5b9d,PodSandboxId:ad407ccb0028bc71790443ddbeb7fe166f39ea141bd92cb7c922b340b8616f7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1
724070486747444214,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zn56f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f080cfb1-ca13-4c35-9cc2-3be1b4b937b8,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:830196092c1ae1954ec4b2bb84853f3e47e2577552ce7037daf1e5585465be9a,PodSandboxId:6f8e5db78f06ea80f174ccf28f62554309837168561fe8f0e3fbf92b5eb59ec3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724070485744833886,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca70c57e73c2b5097176ac30e8268c9b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93e1e00ad0162919680e88c1d69b99b7172fb2f6a6b397fbb7f76a67eda83ab,PodSandboxId:30c17d332500ca434d746510173ed555b4d2515465ed811c7acc5cc9ccba43a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724070480713999138
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 825cbaf20f643ee2cc47f46b826a6055,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9598b04453d89cd6ce3d4dd6c7629286650a6a345ded2d6d2749b82784a2792d,PodSandboxId:c5093b23fedf3302304fbd04b9aff0367b8a5e1eefdefc1cdf766d11e6415f6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724070472359899990,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qk6jw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87423f1a-80ea-45b8-aeab-44b6b062799c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb17d444ee72978731602c861495f7a423ded2cf54a025bacce5e0a1bbfe20e,PodSandboxId:d841a2bb66d62f727f639c91960a1d293d0a6a1b74066354af03664ed5698cd2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724070472323189964,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hxktk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4757a3-c4d5-4c8e-ac15-7d9f3c8ebdb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e7c7306d316b5c910ad016b8b46a5f06d031929386b7d42b7a4e7a1826abb73,PodSandboxId:4444b5bab728c8e060786cef3d1559a0e31b679aa79647a
7da31e86265334cd2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724070469048258396,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zn56f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f080cfb1-ca13-4c35-9cc2-3be1b4b937b8,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c3fd2a7d5b94175873114e6d65a984db0c9862e59e98ef63616570a777fa2d1,PodSandboxId:183fc2d90f7607cc2941f86c2250236ca94d666807eae592bcc4d3aff1c1e0fd,Metadata:&Contain
erMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724070469246238892,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2d7ac6f-57d8-4dcb-ac0e-47f659c9bd7d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303dc84dee1f747221a49b9301955e70afd9ab3e16ebf43124f540edb3a76f98,PodSandboxId:a9ec712790a1dfc0139d56263b61862da505cb60f1d5636afe440c72b6e71214,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724070468857376580,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4fb1002898663faa99a13b77c1e7536,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:866f3a5c577ffd28fd6aaff38dc715c118ac4c36d86a8cbf80bde65a679cb1d2,PodSandboxId:f58682cabb54d3954a7c36ba1d0fdc1a1550939455260ff35b77bbc69ce5760c,Metadata:&ContainerMetadata{Name:etc
d,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724070468727672448,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 825cbaf20f643ee2cc47f46b826a6055,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45dd0720271fefccbb9aa1801c7467b67db245d398ab2aceb24a845682a9f6ff,PodSandboxId:ba7b8e1f69ad1d8d10e31cb7c334a2c123ee192c21910afe0ed9284373737683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Ima
ge:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724070468435225476,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca70c57e73c2b5097176ac30e8268c9b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da63d78797725d087a2cb1fecf380db6d2848c6fec3f15f6bf4ed1022dabf3b,PodSandboxId:270ee51a0921699d7155cf23d6e3b9780b2bcc29dbe992e437018cdf345b3cf7,Metadata:&ContainerMetadata{Name:kube-apiserver,Att
empt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724070468337094693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e8a5dfd1d791aa5334df20dbbc6f92,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fa0a55c7-154c-4898-b9d6-3e6f3c0bb693 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:28:16 kubernetes-upgrade-814177 crio[3096]: time="2024-08-19 12:28:16.067118927Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f4a38b9a-f576-4585-8e6b-2ab20b82c0ba name=/runtime.v1.RuntimeService/Version
	Aug 19 12:28:16 kubernetes-upgrade-814177 crio[3096]: time="2024-08-19 12:28:16.067241805Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f4a38b9a-f576-4585-8e6b-2ab20b82c0ba name=/runtime.v1.RuntimeService/Version
	Aug 19 12:28:16 kubernetes-upgrade-814177 crio[3096]: time="2024-08-19 12:28:16.069139178Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=20b360b7-0968-4b46-81e9-857f15e5ab4b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:28:16 kubernetes-upgrade-814177 crio[3096]: time="2024-08-19 12:28:16.069647227Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070496069616951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20b360b7-0968-4b46-81e9-857f15e5ab4b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:28:16 kubernetes-upgrade-814177 crio[3096]: time="2024-08-19 12:28:16.070313088Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5722c624-963b-4ba3-beb2-d272563d7ef9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:28:16 kubernetes-upgrade-814177 crio[3096]: time="2024-08-19 12:28:16.070514852Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5722c624-963b-4ba3-beb2-d272563d7ef9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:28:16 kubernetes-upgrade-814177 crio[3096]: time="2024-08-19 12:28:16.071199728Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92742d9f8174689d965ddf312491c08e9026d6aad9247ba62d2c6c7c17e129ff,PodSandboxId:d841a2bb66d62f727f639c91960a1d293d0a6a1b74066354af03664ed5698cd2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070492374022606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hxktk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4757a3-c4d5-4c8e-ac15-7d9f3c8ebdb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8343f3278191a5fe9db0fbb0e27f74b20292cb45126c31c3af8a228c34188622,PodSandboxId:5f9a47248bb2804a1e458f2c91c7c8beabd92f45f1571f24f260dc83aaa52c41,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724070492368133339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: f2d7ac6f-57d8-4dcb-ac0e-47f659c9bd7d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b8defaa8b605f7dcd4e9f3ffe08a4a4ee104879f800e7355bdb73383edae35,PodSandboxId:c5093b23fedf3302304fbd04b9aff0367b8a5e1eefdefc1cdf766d11e6415f6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070492344325095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qk6jw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 87423f1a-80ea-45b8-aeab-44b6b062799c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65bb360cd40c8400166e66232daa865bc11073c78767330ed2f2b07fbe12b73f,PodSandboxId:37403351bb5d6d2e8b88cdcec82568a68f37674afdab4567b9f29b01aadfc0c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNI
NG,CreatedAt:1724070489490321464,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4fb1002898663faa99a13b77c1e7536,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bb02685ac28e6d6d56e359a0023e01c1de7d5441ee2da0de542a75ecd603557,PodSandboxId:1ea7703be9320957879c51d4dc7d776a31b4735a19ed2ac946512a2eac0d0cff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,Cr
eatedAt:1724070489477757960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e8a5dfd1d791aa5334df20dbbc6f92,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba92943fb925e7199d3b162d29c34bba33cd3d44d92685e130aba0e9f16b5b9d,PodSandboxId:ad407ccb0028bc71790443ddbeb7fe166f39ea141bd92cb7c922b340b8616f7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1
724070486747444214,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zn56f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f080cfb1-ca13-4c35-9cc2-3be1b4b937b8,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:830196092c1ae1954ec4b2bb84853f3e47e2577552ce7037daf1e5585465be9a,PodSandboxId:6f8e5db78f06ea80f174ccf28f62554309837168561fe8f0e3fbf92b5eb59ec3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724070485744833886,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca70c57e73c2b5097176ac30e8268c9b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93e1e00ad0162919680e88c1d69b99b7172fb2f6a6b397fbb7f76a67eda83ab,PodSandboxId:30c17d332500ca434d746510173ed555b4d2515465ed811c7acc5cc9ccba43a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724070480713999138
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 825cbaf20f643ee2cc47f46b826a6055,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9598b04453d89cd6ce3d4dd6c7629286650a6a345ded2d6d2749b82784a2792d,PodSandboxId:c5093b23fedf3302304fbd04b9aff0367b8a5e1eefdefc1cdf766d11e6415f6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724070472359899990,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qk6jw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87423f1a-80ea-45b8-aeab-44b6b062799c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb17d444ee72978731602c861495f7a423ded2cf54a025bacce5e0a1bbfe20e,PodSandboxId:d841a2bb66d62f727f639c91960a1d293d0a6a1b74066354af03664ed5698cd2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724070472323189964,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hxktk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4757a3-c4d5-4c8e-ac15-7d9f3c8ebdb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e7c7306d316b5c910ad016b8b46a5f06d031929386b7d42b7a4e7a1826abb73,PodSandboxId:4444b5bab728c8e060786cef3d1559a0e31b679aa79647a
7da31e86265334cd2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724070469048258396,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zn56f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f080cfb1-ca13-4c35-9cc2-3be1b4b937b8,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c3fd2a7d5b94175873114e6d65a984db0c9862e59e98ef63616570a777fa2d1,PodSandboxId:183fc2d90f7607cc2941f86c2250236ca94d666807eae592bcc4d3aff1c1e0fd,Metadata:&Contain
erMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724070469246238892,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2d7ac6f-57d8-4dcb-ac0e-47f659c9bd7d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303dc84dee1f747221a49b9301955e70afd9ab3e16ebf43124f540edb3a76f98,PodSandboxId:a9ec712790a1dfc0139d56263b61862da505cb60f1d5636afe440c72b6e71214,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724070468857376580,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4fb1002898663faa99a13b77c1e7536,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:866f3a5c577ffd28fd6aaff38dc715c118ac4c36d86a8cbf80bde65a679cb1d2,PodSandboxId:f58682cabb54d3954a7c36ba1d0fdc1a1550939455260ff35b77bbc69ce5760c,Metadata:&ContainerMetadata{Name:etc
d,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724070468727672448,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 825cbaf20f643ee2cc47f46b826a6055,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45dd0720271fefccbb9aa1801c7467b67db245d398ab2aceb24a845682a9f6ff,PodSandboxId:ba7b8e1f69ad1d8d10e31cb7c334a2c123ee192c21910afe0ed9284373737683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Ima
ge:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724070468435225476,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca70c57e73c2b5097176ac30e8268c9b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da63d78797725d087a2cb1fecf380db6d2848c6fec3f15f6bf4ed1022dabf3b,PodSandboxId:270ee51a0921699d7155cf23d6e3b9780b2bcc29dbe992e437018cdf345b3cf7,Metadata:&ContainerMetadata{Name:kube-apiserver,Att
empt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724070468337094693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e8a5dfd1d791aa5334df20dbbc6f92,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5722c624-963b-4ba3-beb2-d272563d7ef9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:28:16 kubernetes-upgrade-814177 crio[3096]: time="2024-08-19 12:28:16.126026249Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=db26a961-1350-479d-aba8-b974898b5272 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:28:16 kubernetes-upgrade-814177 crio[3096]: time="2024-08-19 12:28:16.126147950Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db26a961-1350-479d-aba8-b974898b5272 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:28:16 kubernetes-upgrade-814177 crio[3096]: time="2024-08-19 12:28:16.127376931Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7debfea8-cc16-46f1-ba21-63d2dc83237e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:28:16 kubernetes-upgrade-814177 crio[3096]: time="2024-08-19 12:28:16.128025253Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070496127989497,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7debfea8-cc16-46f1-ba21-63d2dc83237e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:28:16 kubernetes-upgrade-814177 crio[3096]: time="2024-08-19 12:28:16.128766204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=30fc3c8b-3905-4eca-b37f-3a0462ce848b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:28:16 kubernetes-upgrade-814177 crio[3096]: time="2024-08-19 12:28:16.128896827Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=30fc3c8b-3905-4eca-b37f-3a0462ce848b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:28:16 kubernetes-upgrade-814177 crio[3096]: time="2024-08-19 12:28:16.129361038Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92742d9f8174689d965ddf312491c08e9026d6aad9247ba62d2c6c7c17e129ff,PodSandboxId:d841a2bb66d62f727f639c91960a1d293d0a6a1b74066354af03664ed5698cd2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070492374022606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hxktk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4757a3-c4d5-4c8e-ac15-7d9f3c8ebdb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8343f3278191a5fe9db0fbb0e27f74b20292cb45126c31c3af8a228c34188622,PodSandboxId:5f9a47248bb2804a1e458f2c91c7c8beabd92f45f1571f24f260dc83aaa52c41,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724070492368133339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: f2d7ac6f-57d8-4dcb-ac0e-47f659c9bd7d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b8defaa8b605f7dcd4e9f3ffe08a4a4ee104879f800e7355bdb73383edae35,PodSandboxId:c5093b23fedf3302304fbd04b9aff0367b8a5e1eefdefc1cdf766d11e6415f6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070492344325095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qk6jw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 87423f1a-80ea-45b8-aeab-44b6b062799c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65bb360cd40c8400166e66232daa865bc11073c78767330ed2f2b07fbe12b73f,PodSandboxId:37403351bb5d6d2e8b88cdcec82568a68f37674afdab4567b9f29b01aadfc0c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNI
NG,CreatedAt:1724070489490321464,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4fb1002898663faa99a13b77c1e7536,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bb02685ac28e6d6d56e359a0023e01c1de7d5441ee2da0de542a75ecd603557,PodSandboxId:1ea7703be9320957879c51d4dc7d776a31b4735a19ed2ac946512a2eac0d0cff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,Cr
eatedAt:1724070489477757960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e8a5dfd1d791aa5334df20dbbc6f92,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba92943fb925e7199d3b162d29c34bba33cd3d44d92685e130aba0e9f16b5b9d,PodSandboxId:ad407ccb0028bc71790443ddbeb7fe166f39ea141bd92cb7c922b340b8616f7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1
724070486747444214,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zn56f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f080cfb1-ca13-4c35-9cc2-3be1b4b937b8,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:830196092c1ae1954ec4b2bb84853f3e47e2577552ce7037daf1e5585465be9a,PodSandboxId:6f8e5db78f06ea80f174ccf28f62554309837168561fe8f0e3fbf92b5eb59ec3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724070485744833886,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca70c57e73c2b5097176ac30e8268c9b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93e1e00ad0162919680e88c1d69b99b7172fb2f6a6b397fbb7f76a67eda83ab,PodSandboxId:30c17d332500ca434d746510173ed555b4d2515465ed811c7acc5cc9ccba43a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724070480713999138
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 825cbaf20f643ee2cc47f46b826a6055,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9598b04453d89cd6ce3d4dd6c7629286650a6a345ded2d6d2749b82784a2792d,PodSandboxId:c5093b23fedf3302304fbd04b9aff0367b8a5e1eefdefc1cdf766d11e6415f6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724070472359899990,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qk6jw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87423f1a-80ea-45b8-aeab-44b6b062799c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb17d444ee72978731602c861495f7a423ded2cf54a025bacce5e0a1bbfe20e,PodSandboxId:d841a2bb66d62f727f639c91960a1d293d0a6a1b74066354af03664ed5698cd2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724070472323189964,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hxktk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4757a3-c4d5-4c8e-ac15-7d9f3c8ebdb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e7c7306d316b5c910ad016b8b46a5f06d031929386b7d42b7a4e7a1826abb73,PodSandboxId:4444b5bab728c8e060786cef3d1559a0e31b679aa79647a
7da31e86265334cd2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724070469048258396,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zn56f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f080cfb1-ca13-4c35-9cc2-3be1b4b937b8,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c3fd2a7d5b94175873114e6d65a984db0c9862e59e98ef63616570a777fa2d1,PodSandboxId:183fc2d90f7607cc2941f86c2250236ca94d666807eae592bcc4d3aff1c1e0fd,Metadata:&Contain
erMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724070469246238892,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2d7ac6f-57d8-4dcb-ac0e-47f659c9bd7d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303dc84dee1f747221a49b9301955e70afd9ab3e16ebf43124f540edb3a76f98,PodSandboxId:a9ec712790a1dfc0139d56263b61862da505cb60f1d5636afe440c72b6e71214,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724070468857376580,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4fb1002898663faa99a13b77c1e7536,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:866f3a5c577ffd28fd6aaff38dc715c118ac4c36d86a8cbf80bde65a679cb1d2,PodSandboxId:f58682cabb54d3954a7c36ba1d0fdc1a1550939455260ff35b77bbc69ce5760c,Metadata:&ContainerMetadata{Name:etc
d,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724070468727672448,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 825cbaf20f643ee2cc47f46b826a6055,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45dd0720271fefccbb9aa1801c7467b67db245d398ab2aceb24a845682a9f6ff,PodSandboxId:ba7b8e1f69ad1d8d10e31cb7c334a2c123ee192c21910afe0ed9284373737683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Ima
ge:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724070468435225476,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca70c57e73c2b5097176ac30e8268c9b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da63d78797725d087a2cb1fecf380db6d2848c6fec3f15f6bf4ed1022dabf3b,PodSandboxId:270ee51a0921699d7155cf23d6e3b9780b2bcc29dbe992e437018cdf345b3cf7,Metadata:&ContainerMetadata{Name:kube-apiserver,Att
empt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724070468337094693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e8a5dfd1d791aa5334df20dbbc6f92,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=30fc3c8b-3905-4eca-b37f-3a0462ce848b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:28:16 kubernetes-upgrade-814177 crio[3096]: time="2024-08-19 12:28:16.172119782Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a1d9650-b524-4ebb-91a3-d9c8ac718846 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:28:16 kubernetes-upgrade-814177 crio[3096]: time="2024-08-19 12:28:16.172245261Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a1d9650-b524-4ebb-91a3-d9c8ac718846 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:28:16 kubernetes-upgrade-814177 crio[3096]: time="2024-08-19 12:28:16.174201519Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd52c8d6-7fb5-4af3-8ff8-2f7ace61b26c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:28:16 kubernetes-upgrade-814177 crio[3096]: time="2024-08-19 12:28:16.174956565Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070496174759636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd52c8d6-7fb5-4af3-8ff8-2f7ace61b26c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:28:16 kubernetes-upgrade-814177 crio[3096]: time="2024-08-19 12:28:16.175731131Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3cade984-17a6-4874-86b8-e419f714e3bb name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:28:16 kubernetes-upgrade-814177 crio[3096]: time="2024-08-19 12:28:16.175882636Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3cade984-17a6-4874-86b8-e419f714e3bb name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:28:16 kubernetes-upgrade-814177 crio[3096]: time="2024-08-19 12:28:16.176451401Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92742d9f8174689d965ddf312491c08e9026d6aad9247ba62d2c6c7c17e129ff,PodSandboxId:d841a2bb66d62f727f639c91960a1d293d0a6a1b74066354af03664ed5698cd2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070492374022606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hxktk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4757a3-c4d5-4c8e-ac15-7d9f3c8ebdb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8343f3278191a5fe9db0fbb0e27f74b20292cb45126c31c3af8a228c34188622,PodSandboxId:5f9a47248bb2804a1e458f2c91c7c8beabd92f45f1571f24f260dc83aaa52c41,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724070492368133339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: f2d7ac6f-57d8-4dcb-ac0e-47f659c9bd7d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b8defaa8b605f7dcd4e9f3ffe08a4a4ee104879f800e7355bdb73383edae35,PodSandboxId:c5093b23fedf3302304fbd04b9aff0367b8a5e1eefdefc1cdf766d11e6415f6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070492344325095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qk6jw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 87423f1a-80ea-45b8-aeab-44b6b062799c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65bb360cd40c8400166e66232daa865bc11073c78767330ed2f2b07fbe12b73f,PodSandboxId:37403351bb5d6d2e8b88cdcec82568a68f37674afdab4567b9f29b01aadfc0c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNI
NG,CreatedAt:1724070489490321464,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4fb1002898663faa99a13b77c1e7536,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bb02685ac28e6d6d56e359a0023e01c1de7d5441ee2da0de542a75ecd603557,PodSandboxId:1ea7703be9320957879c51d4dc7d776a31b4735a19ed2ac946512a2eac0d0cff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,Cr
eatedAt:1724070489477757960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e8a5dfd1d791aa5334df20dbbc6f92,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba92943fb925e7199d3b162d29c34bba33cd3d44d92685e130aba0e9f16b5b9d,PodSandboxId:ad407ccb0028bc71790443ddbeb7fe166f39ea141bd92cb7c922b340b8616f7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1
724070486747444214,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zn56f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f080cfb1-ca13-4c35-9cc2-3be1b4b937b8,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:830196092c1ae1954ec4b2bb84853f3e47e2577552ce7037daf1e5585465be9a,PodSandboxId:6f8e5db78f06ea80f174ccf28f62554309837168561fe8f0e3fbf92b5eb59ec3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724070485744833886,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca70c57e73c2b5097176ac30e8268c9b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93e1e00ad0162919680e88c1d69b99b7172fb2f6a6b397fbb7f76a67eda83ab,PodSandboxId:30c17d332500ca434d746510173ed555b4d2515465ed811c7acc5cc9ccba43a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724070480713999138
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 825cbaf20f643ee2cc47f46b826a6055,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9598b04453d89cd6ce3d4dd6c7629286650a6a345ded2d6d2749b82784a2792d,PodSandboxId:c5093b23fedf3302304fbd04b9aff0367b8a5e1eefdefc1cdf766d11e6415f6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724070472359899990,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qk6jw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87423f1a-80ea-45b8-aeab-44b6b062799c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb17d444ee72978731602c861495f7a423ded2cf54a025bacce5e0a1bbfe20e,PodSandboxId:d841a2bb66d62f727f639c91960a1d293d0a6a1b74066354af03664ed5698cd2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724070472323189964,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hxktk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4757a3-c4d5-4c8e-ac15-7d9f3c8ebdb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e7c7306d316b5c910ad016b8b46a5f06d031929386b7d42b7a4e7a1826abb73,PodSandboxId:4444b5bab728c8e060786cef3d1559a0e31b679aa79647a
7da31e86265334cd2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724070469048258396,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zn56f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f080cfb1-ca13-4c35-9cc2-3be1b4b937b8,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c3fd2a7d5b94175873114e6d65a984db0c9862e59e98ef63616570a777fa2d1,PodSandboxId:183fc2d90f7607cc2941f86c2250236ca94d666807eae592bcc4d3aff1c1e0fd,Metadata:&Contain
erMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724070469246238892,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2d7ac6f-57d8-4dcb-ac0e-47f659c9bd7d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303dc84dee1f747221a49b9301955e70afd9ab3e16ebf43124f540edb3a76f98,PodSandboxId:a9ec712790a1dfc0139d56263b61862da505cb60f1d5636afe440c72b6e71214,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724070468857376580,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4fb1002898663faa99a13b77c1e7536,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:866f3a5c577ffd28fd6aaff38dc715c118ac4c36d86a8cbf80bde65a679cb1d2,PodSandboxId:f58682cabb54d3954a7c36ba1d0fdc1a1550939455260ff35b77bbc69ce5760c,Metadata:&ContainerMetadata{Name:etc
d,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724070468727672448,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 825cbaf20f643ee2cc47f46b826a6055,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45dd0720271fefccbb9aa1801c7467b67db245d398ab2aceb24a845682a9f6ff,PodSandboxId:ba7b8e1f69ad1d8d10e31cb7c334a2c123ee192c21910afe0ed9284373737683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Ima
ge:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724070468435225476,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca70c57e73c2b5097176ac30e8268c9b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da63d78797725d087a2cb1fecf380db6d2848c6fec3f15f6bf4ed1022dabf3b,PodSandboxId:270ee51a0921699d7155cf23d6e3b9780b2bcc29dbe992e437018cdf345b3cf7,Metadata:&ContainerMetadata{Name:kube-apiserver,Att
empt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724070468337094693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-814177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e8a5dfd1d791aa5334df20dbbc6f92,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3cade984-17a6-4874-86b8-e419f714e3bb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	92742d9f81746       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   d841a2bb66d62       coredns-6f6b679f8f-hxktk
	8343f3278191a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       3                   5f9a47248bb28       storage-provisioner
	e6b8defaa8b60       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   c5093b23fedf3       coredns-6f6b679f8f-qk6jw
	65bb360cd40c8       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   6 seconds ago       Running             kube-scheduler            2                   37403351bb5d6       kube-scheduler-kubernetes-upgrade-814177
	4bb02685ac28e       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   6 seconds ago       Running             kube-apiserver            2                   1ea7703be9320       kube-apiserver-kubernetes-upgrade-814177
	ba92943fb925e       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 seconds ago       Running             kube-proxy                2                   ad407ccb0028b       kube-proxy-zn56f
	830196092c1ae       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   10 seconds ago      Running             kube-controller-manager   2                   6f8e5db78f06e       kube-controller-manager-kubernetes-upgrade-814177
	d93e1e00ad016       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 seconds ago      Running             etcd                      2                   30c17d332500c       etcd-kubernetes-upgrade-814177
	9598b04453d89       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   23 seconds ago      Exited              coredns                   1                   c5093b23fedf3       coredns-6f6b679f8f-qk6jw
	5eb17d444ee72       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   23 seconds ago      Exited              coredns                   1                   d841a2bb66d62       coredns-6f6b679f8f-hxktk
	1c3fd2a7d5b94       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   27 seconds ago      Exited              storage-provisioner       2                   183fc2d90f760       storage-provisioner
	0e7c7306d316b       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   27 seconds ago      Exited              kube-proxy                1                   4444b5bab728c       kube-proxy-zn56f
	303dc84dee1f7       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   27 seconds ago      Exited              kube-scheduler            1                   a9ec712790a1d       kube-scheduler-kubernetes-upgrade-814177
	866f3a5c577ff       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   27 seconds ago      Exited              etcd                      1                   f58682cabb54d       etcd-kubernetes-upgrade-814177
	45dd0720271fe       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   27 seconds ago      Exited              kube-controller-manager   1                   ba7b8e1f69ad1       kube-controller-manager-kubernetes-upgrade-814177
	0da63d7879772       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   27 seconds ago      Exited              kube-apiserver            1                   270ee51a09216       kube-apiserver-kubernetes-upgrade-814177
	
	
	==> coredns [5eb17d444ee72978731602c861495f7a423ded2cf54a025bacce5e0a1bbfe20e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [92742d9f8174689d965ddf312491c08e9026d6aad9247ba62d2c6c7c17e129ff] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [9598b04453d89cd6ce3d4dd6c7629286650a6a345ded2d6d2749b82784a2792d] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e6b8defaa8b605f7dcd4e9f3ffe08a4a4ee104879f800e7355bdb73383edae35] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-814177
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-814177
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:27:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-814177
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:28:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:28:11 +0000   Mon, 19 Aug 2024 12:26:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:28:11 +0000   Mon, 19 Aug 2024 12:26:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:28:11 +0000   Mon, 19 Aug 2024 12:26:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:28:11 +0000   Mon, 19 Aug 2024 12:27:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.23
	  Hostname:    kubernetes-upgrade-814177
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 663d412013704af1822849e9ddd11451
	  System UUID:                663d4120-1370-4af1-8228-49e9ddd11451
	  Boot ID:                    7ac183f2-c959-488e-9520-36e7dec7abc0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-hxktk                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     69s
	  kube-system                 coredns-6f6b679f8f-qk6jw                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     69s
	  kube-system                 etcd-kubernetes-upgrade-814177                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         75s
	  kube-system                 kube-apiserver-kubernetes-upgrade-814177             250m (12%)    0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-814177    200m (10%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-proxy-zn56f                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-scheduler-kubernetes-upgrade-814177             100m (5%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 68s                kube-proxy       
	  Normal  Starting                 5s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  80s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  80s (x8 over 80s)  kubelet          Node kubernetes-upgrade-814177 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    80s (x8 over 80s)  kubelet          Node kubernetes-upgrade-814177 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     80s (x7 over 80s)  kubelet          Node kubernetes-upgrade-814177 status is now: NodeHasSufficientPID
	  Normal  Starting                 80s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           70s                node-controller  Node kubernetes-upgrade-814177 event: Registered Node kubernetes-upgrade-814177 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s (x8 over 7s)    kubelet          Node kubernetes-upgrade-814177 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x8 over 7s)    kubelet          Node kubernetes-upgrade-814177 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x7 over 7s)    kubelet          Node kubernetes-upgrade-814177 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-814177 event: Registered Node kubernetes-upgrade-814177 in Controller
	
	
	==> dmesg <==
	[  +6.823994] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.077747] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.079305] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.189812] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.166431] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.307751] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +5.337522] systemd-fstab-generator[734]: Ignoring "noauto" option for root device
	[  +0.066537] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.819855] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[Aug19 12:27] systemd-fstab-generator[1250]: Ignoring "noauto" option for root device
	[  +0.102094] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.112113] kauditd_printk_skb: 18 callbacks suppressed
	[ +39.756160] systemd-fstab-generator[2268]: Ignoring "noauto" option for root device
	[  +0.090516] kauditd_printk_skb: 88 callbacks suppressed
	[  +0.080534] systemd-fstab-generator[2280]: Ignoring "noauto" option for root device
	[  +0.432135] systemd-fstab-generator[2410]: Ignoring "noauto" option for root device
	[  +0.329913] systemd-fstab-generator[2546]: Ignoring "noauto" option for root device
	[  +1.085034] systemd-fstab-generator[2926]: Ignoring "noauto" option for root device
	[  +1.610572] systemd-fstab-generator[3269]: Ignoring "noauto" option for root device
	[  +5.388830] kauditd_printk_skb: 301 callbacks suppressed
	[Aug19 12:28] kauditd_printk_skb: 3 callbacks suppressed
	[  +6.008526] systemd-fstab-generator[4181]: Ignoring "noauto" option for root device
	[  +0.086595] kauditd_printk_skb: 1 callbacks suppressed
	[  +4.760344] systemd-fstab-generator[4564]: Ignoring "noauto" option for root device
	[  +1.712422] kauditd_printk_skb: 64 callbacks suppressed
	
	
	==> etcd [866f3a5c577ffd28fd6aaff38dc715c118ac4c36d86a8cbf80bde65a679cb1d2] <==
	{"level":"warn","ts":"2024-08-19T12:27:49.415884Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-08-19T12:27:49.416083Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.50.23:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.50.23:2380","--initial-cluster=kubernetes-upgrade-814177=https://192.168.50.23:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.50.23:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.50.23:2380","--name=kubernetes-upgrade-814177","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot
-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-08-19T12:27:49.416159Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-08-19T12:27:49.416182Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-08-19T12:27:49.416193Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.50.23:2380"]}
	{"level":"info","ts":"2024-08-19T12:27:49.416226Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T12:27:49.417673Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.23:2379"]}
	{"level":"info","ts":"2024-08-19T12:27:49.417848Z","caller":"embed/etcd.go:310","msg":"starting an etcd server","etcd-version":"3.5.15","git-sha":"9a5533382","go-version":"go1.21.12","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"kubernetes-upgrade-814177","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.50.23:2380"],"listen-peer-urls":["https://192.168.50.23:2380"],"advertise-client-urls":["https://192.168.50.23:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.23:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","i
nitial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-08-19T12:27:49.445853Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"26.27917ms"}
	{"level":"info","ts":"2024-08-19T12:27:49.490008Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-08-19T12:27:49.524983Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"639be5bb85f82108","local-member-id":"6311727a8df181c7","commit-index":434}
	{"level":"info","ts":"2024-08-19T12:27:49.525103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6311727a8df181c7 switched to configuration voters=()"}
	{"level":"info","ts":"2024-08-19T12:27:49.525167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6311727a8df181c7 became follower at term 2"}
	{"level":"info","ts":"2024-08-19T12:27:49.525200Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 6311727a8df181c7 [peers: [], term: 2, commit: 434, applied: 0, lastindex: 434, lastterm: 2]"}
	{"level":"warn","ts":"2024-08-19T12:27:49.662589Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-08-19T12:27:49.786810Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":419}
	
	
	==> etcd [d93e1e00ad0162919680e88c1d69b99b7172fb2f6a6b397fbb7f76a67eda83ab] <==
	{"level":"info","ts":"2024-08-19T12:28:00.852319Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T12:28:00.852634Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6311727a8df181c7","initial-advertise-peer-urls":["https://192.168.50.23:2380"],"listen-peer-urls":["https://192.168.50.23:2380"],"advertise-client-urls":["https://192.168.50.23:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.23:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T12:28:00.852670Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T12:28:00.852842Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.23:2380"}
	{"level":"info","ts":"2024-08-19T12:28:00.852862Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.23:2380"}
	{"level":"info","ts":"2024-08-19T12:28:01.834877Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6311727a8df181c7 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T12:28:01.834920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6311727a8df181c7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T12:28:01.834945Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6311727a8df181c7 received MsgPreVoteResp from 6311727a8df181c7 at term 2"}
	{"level":"info","ts":"2024-08-19T12:28:01.834959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6311727a8df181c7 became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T12:28:01.834965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6311727a8df181c7 received MsgVoteResp from 6311727a8df181c7 at term 3"}
	{"level":"info","ts":"2024-08-19T12:28:01.834974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6311727a8df181c7 became leader at term 3"}
	{"level":"info","ts":"2024-08-19T12:28:01.834981Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6311727a8df181c7 elected leader 6311727a8df181c7 at term 3"}
	{"level":"info","ts":"2024-08-19T12:28:01.840921Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6311727a8df181c7","local-member-attributes":"{Name:kubernetes-upgrade-814177 ClientURLs:[https://192.168.50.23:2379]}","request-path":"/0/members/6311727a8df181c7/attributes","cluster-id":"639be5bb85f82108","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T12:28:01.840941Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T12:28:01.841186Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T12:28:01.841625Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T12:28:01.841647Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T12:28:01.842306Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T12:28:01.842418Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T12:28:01.843585Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T12:28:01.843595Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.23:2379"}
	{"level":"warn","ts":"2024-08-19T12:28:14.512370Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.702861ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/statefulset-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-08-19T12:28:14.512501Z","caller":"traceutil/trace.go:171","msg":"trace[1034914914] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/statefulset-controller; range_end:; response_count:1; response_revision:481; }","duration":"146.848074ms","start":"2024-08-19T12:28:14.365636Z","end":"2024-08-19T12:28:14.512484Z","steps":["trace[1034914914] 'range keys from in-memory index tree'  (duration: 146.604627ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T12:28:14.512462Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.140045ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:721"}
	{"level":"info","ts":"2024-08-19T12:28:14.512722Z","caller":"traceutil/trace.go:171","msg":"trace[940606759] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:481; }","duration":"121.365954ms","start":"2024-08-19T12:28:14.391300Z","end":"2024-08-19T12:28:14.512666Z","steps":["trace[940606759] 'range keys from in-memory index tree'  (duration: 121.025064ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:28:16 up 1 min,  0 users,  load average: 0.77, 0.23, 0.08
	Linux kubernetes-upgrade-814177 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0da63d78797725d087a2cb1fecf380db6d2848c6fec3f15f6bf4ed1022dabf3b] <==
	I0819 12:27:49.110756       1 options.go:228] external host was not specified, using 192.168.50.23
	I0819 12:27:49.126137       1 server.go:142] Version: v1.31.0
	I0819 12:27:49.135190       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0819 12:27:50.355613       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:27:50.355739       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0819 12:27:50.355847       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0819 12:27:50.364876       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0819 12:27:50.364897       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0819 12:27:50.365079       1 instance.go:232] Using reconciler: lease
	I0819 12:27:50.365778       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0819 12:27:50.366723       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [4bb02685ac28e6d6d56e359a0023e01c1de7d5441ee2da0de542a75ecd603557] <==
	I0819 12:28:11.379033       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 12:28:11.398759       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 12:28:11.398851       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 12:28:11.398889       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 12:28:11.398894       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 12:28:11.398929       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 12:28:11.399209       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 12:28:11.400948       1 aggregator.go:171] initial CRD sync complete...
	I0819 12:28:11.400982       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 12:28:11.400988       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 12:28:11.400993       1 cache.go:39] Caches are synced for autoregister controller
	I0819 12:28:11.404169       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 12:28:11.431859       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 12:28:11.445095       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 12:28:11.445130       1 policy_source.go:224] refreshing policies
	E0819 12:28:11.453987       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0819 12:28:11.520185       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 12:28:12.280351       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 12:28:13.377632       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 12:28:13.395462       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 12:28:13.440744       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 12:28:13.480694       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 12:28:13.493083       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 12:28:14.789323       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 12:28:15.321574       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [45dd0720271fefccbb9aa1801c7467b67db245d398ab2aceb24a845682a9f6ff] <==
	
	
	==> kube-controller-manager [830196092c1ae1954ec4b2bb84853f3e47e2577552ce7037daf1e5585465be9a] <==
	I0819 12:28:15.023743       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-814177"
	I0819 12:28:15.046030       1 shared_informer.go:320] Caches are synced for HPA
	I0819 12:28:15.058977       1 shared_informer.go:320] Caches are synced for deployment
	I0819 12:28:15.116984       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0819 12:28:15.119956       1 shared_informer.go:320] Caches are synced for endpoint
	I0819 12:28:15.121359       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0819 12:28:15.121457       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0819 12:28:15.121754       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0819 12:28:15.121862       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0819 12:28:15.136186       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0819 12:28:15.144069       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0819 12:28:15.171626       1 shared_informer.go:320] Caches are synced for disruption
	I0819 12:28:15.178888       1 shared_informer.go:320] Caches are synced for taint
	I0819 12:28:15.179461       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0819 12:28:15.180276       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-814177"
	I0819 12:28:15.182168       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0819 12:28:15.199026       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="228.159377ms"
	I0819 12:28:15.199178       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="80.008µs"
	I0819 12:28:15.234734       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 12:28:15.235026       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 12:28:15.688446       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 12:28:15.705190       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 12:28:15.705225       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0819 12:28:16.484385       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="22.389613ms"
	I0819 12:28:16.485635       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="62.684µs"
	
	
	==> kube-proxy [0e7c7306d316b5c910ad016b8b46a5f06d031929386b7d42b7a4e7a1826abb73] <==
	
	
	==> kube-proxy [ba92943fb925e7199d3b162d29c34bba33cd3d44d92685e130aba0e9f16b5b9d] <==
	 >
	E0819 12:28:06.898263       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 12:28:06.900674       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-814177\": dial tcp 192.168.50.23:8443: connect: connection refused"
	E0819 12:28:08.025546       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-814177\": dial tcp 192.168.50.23:8443: connect: connection refused"
	I0819 12:28:11.390075       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.23"]
	E0819 12:28:11.390146       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 12:28:11.462092       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 12:28:11.462255       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 12:28:11.462333       1 server_linux.go:169] "Using iptables Proxier"
	I0819 12:28:11.464884       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 12:28:11.465264       1 server.go:483] "Version info" version="v1.31.0"
	I0819 12:28:11.465321       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:28:11.466523       1 config.go:197] "Starting service config controller"
	I0819 12:28:11.466578       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 12:28:11.466618       1 config.go:104] "Starting endpoint slice config controller"
	I0819 12:28:11.466634       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 12:28:11.467123       1 config.go:326] "Starting node config controller"
	I0819 12:28:11.467630       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 12:28:11.567413       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 12:28:11.567616       1 shared_informer.go:320] Caches are synced for service config
	I0819 12:28:11.567829       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [303dc84dee1f747221a49b9301955e70afd9ab3e16ebf43124f540edb3a76f98] <==
	
	
	==> kube-scheduler [65bb360cd40c8400166e66232daa865bc11073c78767330ed2f2b07fbe12b73f] <==
	I0819 12:28:10.074771       1 serving.go:386] Generated self-signed cert in-memory
	W0819 12:28:11.339461       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 12:28:11.339533       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 12:28:11.339544       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 12:28:11.339551       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 12:28:11.377006       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 12:28:11.377060       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:28:11.381346       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 12:28:11.381528       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 12:28:11.381558       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 12:28:11.381603       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 12:28:11.482167       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 12:28:09 kubernetes-upgrade-814177 kubelet[4188]: I0819 12:28:09.271712    4188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca70c57e73c2b5097176ac30e8268c9b-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-814177\" (UID: \"ca70c57e73c2b5097176ac30e8268c9b\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-814177"
	Aug 19 12:28:09 kubernetes-upgrade-814177 kubelet[4188]: I0819 12:28:09.271728    4188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ca70c57e73c2b5097176ac30e8268c9b-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-814177\" (UID: \"ca70c57e73c2b5097176ac30e8268c9b\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-814177"
	Aug 19 12:28:09 kubernetes-upgrade-814177 kubelet[4188]: I0819 12:28:09.271745    4188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca70c57e73c2b5097176ac30e8268c9b-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-814177\" (UID: \"ca70c57e73c2b5097176ac30e8268c9b\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-814177"
	Aug 19 12:28:09 kubernetes-upgrade-814177 kubelet[4188]: I0819 12:28:09.271761    4188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4fb1002898663faa99a13b77c1e7536-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-814177\" (UID: \"d4fb1002898663faa99a13b77c1e7536\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-814177"
	Aug 19 12:28:09 kubernetes-upgrade-814177 kubelet[4188]: I0819 12:28:09.271775    4188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/825cbaf20f643ee2cc47f46b826a6055-etcd-data\") pod \"etcd-kubernetes-upgrade-814177\" (UID: \"825cbaf20f643ee2cc47f46b826a6055\") " pod="kube-system/etcd-kubernetes-upgrade-814177"
	Aug 19 12:28:09 kubernetes-upgrade-814177 kubelet[4188]: I0819 12:28:09.424964    4188 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-814177"
	Aug 19 12:28:09 kubernetes-upgrade-814177 kubelet[4188]: E0819 12:28:09.425939    4188 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.23:8443: connect: connection refused" node="kubernetes-upgrade-814177"
	Aug 19 12:28:09 kubernetes-upgrade-814177 kubelet[4188]: I0819 12:28:09.466233    4188 scope.go:117] "RemoveContainer" containerID="0da63d78797725d087a2cb1fecf380db6d2848c6fec3f15f6bf4ed1022dabf3b"
	Aug 19 12:28:09 kubernetes-upgrade-814177 kubelet[4188]: I0819 12:28:09.471948    4188 scope.go:117] "RemoveContainer" containerID="303dc84dee1f747221a49b9301955e70afd9ab3e16ebf43124f540edb3a76f98"
	Aug 19 12:28:09 kubernetes-upgrade-814177 kubelet[4188]: E0819 12:28:09.630433    4188 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-814177?timeout=10s\": dial tcp 192.168.50.23:8443: connect: connection refused" interval="800ms"
	Aug 19 12:28:09 kubernetes-upgrade-814177 kubelet[4188]: I0819 12:28:09.827387    4188 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-814177"
	Aug 19 12:28:11 kubernetes-upgrade-814177 kubelet[4188]: I0819 12:28:11.544924    4188 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-814177"
	Aug 19 12:28:11 kubernetes-upgrade-814177 kubelet[4188]: I0819 12:28:11.545311    4188 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-814177"
	Aug 19 12:28:11 kubernetes-upgrade-814177 kubelet[4188]: I0819 12:28:11.545377    4188 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 19 12:28:11 kubernetes-upgrade-814177 kubelet[4188]: I0819 12:28:11.546422    4188 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 19 12:28:11 kubernetes-upgrade-814177 kubelet[4188]: I0819 12:28:11.991871    4188 apiserver.go:52] "Watching apiserver"
	Aug 19 12:28:12 kubernetes-upgrade-814177 kubelet[4188]: I0819 12:28:12.029703    4188 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 19 12:28:12 kubernetes-upgrade-814177 kubelet[4188]: I0819 12:28:12.047673    4188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f080cfb1-ca13-4c35-9cc2-3be1b4b937b8-lib-modules\") pod \"kube-proxy-zn56f\" (UID: \"f080cfb1-ca13-4c35-9cc2-3be1b4b937b8\") " pod="kube-system/kube-proxy-zn56f"
	Aug 19 12:28:12 kubernetes-upgrade-814177 kubelet[4188]: I0819 12:28:12.047766    4188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f2d7ac6f-57d8-4dcb-ac0e-47f659c9bd7d-tmp\") pod \"storage-provisioner\" (UID: \"f2d7ac6f-57d8-4dcb-ac0e-47f659c9bd7d\") " pod="kube-system/storage-provisioner"
	Aug 19 12:28:12 kubernetes-upgrade-814177 kubelet[4188]: I0819 12:28:12.047867    4188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f080cfb1-ca13-4c35-9cc2-3be1b4b937b8-xtables-lock\") pod \"kube-proxy-zn56f\" (UID: \"f080cfb1-ca13-4c35-9cc2-3be1b4b937b8\") " pod="kube-system/kube-proxy-zn56f"
	Aug 19 12:28:12 kubernetes-upgrade-814177 kubelet[4188]: E0819 12:28:12.210434    4188 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-814177\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-814177"
	Aug 19 12:28:12 kubernetes-upgrade-814177 kubelet[4188]: I0819 12:28:12.300120    4188 scope.go:117] "RemoveContainer" containerID="9598b04453d89cd6ce3d4dd6c7629286650a6a345ded2d6d2749b82784a2792d"
	Aug 19 12:28:12 kubernetes-upgrade-814177 kubelet[4188]: I0819 12:28:12.309153    4188 scope.go:117] "RemoveContainer" containerID="5eb17d444ee72978731602c861495f7a423ded2cf54a025bacce5e0a1bbfe20e"
	Aug 19 12:28:12 kubernetes-upgrade-814177 kubelet[4188]: I0819 12:28:12.309449    4188 scope.go:117] "RemoveContainer" containerID="1c3fd2a7d5b94175873114e6d65a984db0c9862e59e98ef63616570a777fa2d1"
	Aug 19 12:28:16 kubernetes-upgrade-814177 kubelet[4188]: I0819 12:28:16.443808    4188 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [1c3fd2a7d5b94175873114e6d65a984db0c9862e59e98ef63616570a777fa2d1] <==
	
	
	==> storage-provisioner [8343f3278191a5fe9db0fbb0e27f74b20292cb45126c31c3af8a228c34188622] <==
	I0819 12:28:12.567952       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 12:28:12.584213       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 12:28:12.584255       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 12:28:15.617011  163368 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19476-99410/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
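
Aside on the "bufio.Scanner: token too long" error in the stderr above: that message is Go's bufio.Scanner hitting its default 64 KiB per-line limit (bufio.MaxScanTokenSize) while reading lastStart.txt. Below is a minimal, hypothetical sketch of reading such a file with an enlarged per-line cap; the path and the 1 MiB limit are illustrative only, not minikube's actual code or fix.

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Hypothetical path standing in for the lastStart.txt that failed to scan above.
		f, err := os.Open("/tmp/lastStart.txt")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default cap is bufio.MaxScanTokenSize (64 KiB); a longer line makes
		// Scan() stop and Err() return "bufio.Scanner: token too long".
		// Raise the cap to 1 MiB per line for this sketch.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)

		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err) // bufio.ErrTooLong if a line still exceeds the cap
		}
	}
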
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-814177 -n kubernetes-upgrade-814177
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-814177 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-814177" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-814177
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-814177: (1.489615662s)
--- FAIL: TestKubernetesUpgrade (421.15s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (69.96s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-732494 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-732494 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m6.116195029s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-732494] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19476
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19476-99410/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-732494" primary control-plane node in "pause-732494" cluster
	* Updating the running kvm2 "pause-732494" VM ...
	* Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-732494" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 12:22:21.221221  152005 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:22:21.221601  152005 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:22:21.221624  152005 out.go:358] Setting ErrFile to fd 2...
	I0819 12:22:21.221635  152005 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:22:21.221951  152005 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 12:22:21.222748  152005 out.go:352] Setting JSON to false
	I0819 12:22:21.224269  152005 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7487,"bootTime":1724062654,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 12:22:21.224368  152005 start.go:139] virtualization: kvm guest
	I0819 12:22:21.226725  152005 out.go:177] * [pause-732494] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 12:22:21.228194  152005 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 12:22:21.228250  152005 notify.go:220] Checking for updates...
	I0819 12:22:21.229744  152005 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:22:21.231610  152005 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 12:22:21.233158  152005 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 12:22:21.234450  152005 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 12:22:21.235859  152005 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 12:22:21.237670  152005 config.go:182] Loaded profile config "pause-732494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:22:21.238103  152005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:22:21.238169  152005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:22:21.255833  152005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40089
	I0819 12:22:21.256421  152005 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:22:21.257114  152005 main.go:141] libmachine: Using API Version  1
	I0819 12:22:21.257147  152005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:22:21.257628  152005 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:22:21.257844  152005 main.go:141] libmachine: (pause-732494) Calling .DriverName
	I0819 12:22:21.258137  152005 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 12:22:21.258625  152005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:22:21.258667  152005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:22:21.277092  152005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41231
	I0819 12:22:21.277605  152005 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:22:21.278271  152005 main.go:141] libmachine: Using API Version  1
	I0819 12:22:21.278295  152005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:22:21.278695  152005 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:22:21.278914  152005 main.go:141] libmachine: (pause-732494) Calling .DriverName
	I0819 12:22:21.321115  152005 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 12:22:21.322653  152005 start.go:297] selected driver: kvm2
	I0819 12:22:21.322684  152005 start.go:901] validating driver "kvm2" against &{Name:pause-732494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-732494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:22:21.322865  152005 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 12:22:21.323334  152005 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:22:21.323457  152005 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19476-99410/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 12:22:21.341454  152005 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 12:22:21.342410  152005 cni.go:84] Creating CNI manager for ""
	I0819 12:22:21.342426  152005 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 12:22:21.342492  152005 start.go:340] cluster config:
	{Name:pause-732494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-732494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:22:21.342669  152005 iso.go:125] acquiring lock: {Name:mkf3e460532d8bd3cc1352bbf698537d79a481a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:22:21.344601  152005 out.go:177] * Starting "pause-732494" primary control-plane node in "pause-732494" cluster
	I0819 12:22:21.345913  152005 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:22:21.345953  152005 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 12:22:21.345960  152005 cache.go:56] Caching tarball of preloaded images
	I0819 12:22:21.346060  152005 preload.go:172] Found /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 12:22:21.346072  152005 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 12:22:21.346183  152005 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/pause-732494/config.json ...
	I0819 12:22:21.346375  152005 start.go:360] acquireMachinesLock for pause-732494: {Name:mk79269fbbd6912302c8df8ddb039d4e2f1a0790 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 12:22:31.520454  152005 start.go:364] duration metric: took 10.174028607s to acquireMachinesLock for "pause-732494"
	I0819 12:22:31.520524  152005 start.go:96] Skipping create...Using existing machine configuration
	I0819 12:22:31.520531  152005 fix.go:54] fixHost starting: 
	I0819 12:22:31.520982  152005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:22:31.521034  152005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:22:31.539040  152005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41227
	I0819 12:22:31.539509  152005 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:22:31.540130  152005 main.go:141] libmachine: Using API Version  1
	I0819 12:22:31.540161  152005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:22:31.540527  152005 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:22:31.540752  152005 main.go:141] libmachine: (pause-732494) Calling .DriverName
	I0819 12:22:31.540937  152005 main.go:141] libmachine: (pause-732494) Calling .GetState
	I0819 12:22:31.542807  152005 fix.go:112] recreateIfNeeded on pause-732494: state=Running err=<nil>
	W0819 12:22:31.542843  152005 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 12:22:31.544605  152005 out.go:177] * Updating the running kvm2 "pause-732494" VM ...
	I0819 12:22:31.546005  152005 machine.go:93] provisionDockerMachine start ...
	I0819 12:22:31.546036  152005 main.go:141] libmachine: (pause-732494) Calling .DriverName
	I0819 12:22:31.546450  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:31.549220  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.549640  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:31.549670  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.549836  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:31.550023  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:31.550181  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:31.550335  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:31.550486  152005 main.go:141] libmachine: Using SSH client type: native
	I0819 12:22:31.550718  152005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0819 12:22:31.550730  152005 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 12:22:31.651775  152005 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-732494
	
	I0819 12:22:31.651812  152005 main.go:141] libmachine: (pause-732494) Calling .GetMachineName
	I0819 12:22:31.652054  152005 buildroot.go:166] provisioning hostname "pause-732494"
	I0819 12:22:31.652087  152005 main.go:141] libmachine: (pause-732494) Calling .GetMachineName
	I0819 12:22:31.652258  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:31.655130  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.655458  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:31.655495  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.655664  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:31.655870  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:31.656026  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:31.656159  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:31.656335  152005 main.go:141] libmachine: Using SSH client type: native
	I0819 12:22:31.656578  152005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0819 12:22:31.656597  152005 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-732494 && echo "pause-732494" | sudo tee /etc/hostname
	I0819 12:22:31.772723  152005 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-732494
	
	I0819 12:22:31.772758  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:31.775428  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.775831  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:31.775874  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.776095  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:31.776303  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:31.776471  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:31.776596  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:31.776797  152005 main.go:141] libmachine: Using SSH client type: native
	I0819 12:22:31.777031  152005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0819 12:22:31.777058  152005 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-732494' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-732494/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-732494' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 12:22:31.885199  152005 main.go:141] libmachine: SSH cmd err, output: <nil>: 
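The host-record script above is idempotent: it only rewrites the 127.0.1.1 entry when the hostname is missing from /etc/hosts. A quick, illustrative way to confirm the result from inside the VM (not part of the original run):

	hostname                       # should print pause-732494
	grep '^127.0.1.1' /etc/hosts   # should map 127.0.1.1 to pause-732494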
	I0819 12:22:31.885244  152005 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19476-99410/.minikube CaCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19476-99410/.minikube}
	I0819 12:22:31.885294  152005 buildroot.go:174] setting up certificates
	I0819 12:22:31.885311  152005 provision.go:84] configureAuth start
	I0819 12:22:31.885327  152005 main.go:141] libmachine: (pause-732494) Calling .GetMachineName
	I0819 12:22:31.885632  152005 main.go:141] libmachine: (pause-732494) Calling .GetIP
	I0819 12:22:31.888635  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.889065  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:31.889095  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.889187  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:31.892060  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.892525  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:31.892555  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.892706  152005 provision.go:143] copyHostCerts
	I0819 12:22:31.892778  152005 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem, removing ...
	I0819 12:22:31.892796  152005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 12:22:31.892870  152005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem (1082 bytes)
	I0819 12:22:31.893002  152005 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem, removing ...
	I0819 12:22:31.893013  152005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 12:22:31.893042  152005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem (1123 bytes)
	I0819 12:22:31.893147  152005 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem, removing ...
	I0819 12:22:31.893163  152005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 12:22:31.893191  152005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem (1679 bytes)
	I0819 12:22:31.893277  152005 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem org=jenkins.pause-732494 san=[127.0.0.1 192.168.39.24 localhost minikube pause-732494]
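minikube generates this server certificate internally in Go; the snippet below is only a rough openssl equivalent of the same idea, a CA-signed certificate whose SANs cover the IP and hostnames listed in the log line above. File names such as server.csr, ca.pem and ca-key.pem are illustrative placeholders.

	# Hypothetical openssl re-creation of the SAN set logged above.
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.pause-732494/CN=pause-732494"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.24,DNS:localhost,DNS:minikube,DNS:pause-732494") \
	  -out server.pem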
	I0819 12:22:32.197776  152005 provision.go:177] copyRemoteCerts
	I0819 12:22:32.197836  152005 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 12:22:32.197862  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:32.200913  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:32.201260  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:32.201305  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:32.201443  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:32.201726  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:32.202016  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:32.202206  152005 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/pause-732494/id_rsa Username:docker}
	I0819 12:22:32.281680  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 12:22:32.308734  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 12:22:32.337096  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 12:22:32.369669  152005 provision.go:87] duration metric: took 484.343284ms to configureAuth
	I0819 12:22:32.369704  152005 buildroot.go:189] setting minikube options for container-runtime
	I0819 12:22:32.369983  152005 config.go:182] Loaded profile config "pause-732494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:22:32.370079  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:32.372952  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:32.373295  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:32.373323  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:32.373530  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:32.373766  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:32.373971  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:32.374187  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:32.374386  152005 main.go:141] libmachine: Using SSH client type: native
	I0819 12:22:32.374641  152005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0819 12:22:32.374667  152005 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 12:22:37.874249  152005 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 12:22:37.874286  152005 machine.go:96] duration metric: took 6.328260286s to provisionDockerMachine
	I0819 12:22:37.874305  152005 start.go:293] postStartSetup for "pause-732494" (driver="kvm2")
	I0819 12:22:37.874327  152005 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 12:22:37.874357  152005 main.go:141] libmachine: (pause-732494) Calling .DriverName
	I0819 12:22:37.874822  152005 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 12:22:37.874853  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:37.878095  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:37.878564  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:37.878590  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:37.878780  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:37.878991  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:37.879159  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:37.879310  152005 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/pause-732494/id_rsa Username:docker}
	I0819 12:22:37.959139  152005 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 12:22:37.963530  152005 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 12:22:37.963575  152005 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/addons for local assets ...
	I0819 12:22:37.963662  152005 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/files for local assets ...
	I0819 12:22:37.963784  152005 filesync.go:149] local asset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> 1066322.pem in /etc/ssl/certs
	I0819 12:22:37.963908  152005 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 12:22:37.973387  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 12:22:37.997255  152005 start.go:296] duration metric: took 122.932444ms for postStartSetup
	I0819 12:22:37.997302  152005 fix.go:56] duration metric: took 6.47677026s for fixHost
	I0819 12:22:37.997324  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:38.000043  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:38.000434  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:38.000464  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:38.000645  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:38.000845  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:38.001023  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:38.001221  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:38.001377  152005 main.go:141] libmachine: Using SSH client type: native
	I0819 12:22:38.001610  152005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0819 12:22:38.001627  152005 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 12:22:38.100660  152005 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724070158.089732951
	
	I0819 12:22:38.100685  152005 fix.go:216] guest clock: 1724070158.089732951
	I0819 12:22:38.100694  152005 fix.go:229] Guest: 2024-08-19 12:22:38.089732951 +0000 UTC Remote: 2024-08-19 12:22:37.997306217 +0000 UTC m=+16.826858779 (delta=92.426734ms)
	I0819 12:22:38.100740  152005 fix.go:200] guest clock delta is within tolerance: 92.426734ms
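The clock check above reads the guest clock over SSH with `date +%s.%N` and compares it against the host clock, accepting a small delta. A minimal reproduction using the SSH key, user and IP shown in the log (purely illustrative):

	guest=$(ssh -o StrictHostKeyChecking=no \
	  -i /home/jenkins/minikube-integration/19476-99410/.minikube/machines/pause-732494/id_rsa \
	  docker@192.168.39.24 'date +%s.%N')
	host=$(date +%s.%N)
	awk -v h="$host" -v g="$guest" 'BEGIN { printf "guest/host clock delta: %.3fs\n", h - g }'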
	I0819 12:22:38.100747  152005 start.go:83] releasing machines lock for "pause-732494", held for 6.580248803s
	I0819 12:22:38.100776  152005 main.go:141] libmachine: (pause-732494) Calling .DriverName
	I0819 12:22:38.101106  152005 main.go:141] libmachine: (pause-732494) Calling .GetIP
	I0819 12:22:38.103703  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:38.104187  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:38.104221  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:38.104404  152005 main.go:141] libmachine: (pause-732494) Calling .DriverName
	I0819 12:22:38.105033  152005 main.go:141] libmachine: (pause-732494) Calling .DriverName
	I0819 12:22:38.105240  152005 main.go:141] libmachine: (pause-732494) Calling .DriverName
	I0819 12:22:38.105344  152005 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 12:22:38.105402  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:38.105438  152005 ssh_runner.go:195] Run: cat /version.json
	I0819 12:22:38.105461  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:38.108213  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:38.108474  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:38.108533  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:38.108562  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:38.108718  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:38.108855  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:38.108877  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:38.108922  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:38.109024  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:38.109093  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:38.109194  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:38.109283  152005 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/pause-732494/id_rsa Username:docker}
	I0819 12:22:38.109321  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:38.109440  152005 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/pause-732494/id_rsa Username:docker}
	I0819 12:22:38.185742  152005 ssh_runner.go:195] Run: systemctl --version
	I0819 12:22:38.206055  152005 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 12:22:38.361478  152005 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 12:22:38.367061  152005 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 12:22:38.367127  152005 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 12:22:38.377122  152005 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 12:22:38.377164  152005 start.go:495] detecting cgroup driver to use...
	I0819 12:22:38.377244  152005 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 12:22:38.393342  152005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 12:22:38.413251  152005 docker.go:217] disabling cri-docker service (if available) ...
	I0819 12:22:38.413325  152005 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 12:22:38.429264  152005 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 12:22:38.447906  152005 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 12:22:38.584508  152005 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 12:22:38.718505  152005 docker.go:233] disabling docker service ...
	I0819 12:22:38.718594  152005 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 12:22:38.735006  152005 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 12:22:38.748877  152005 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 12:22:38.889235  152005 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 12:22:39.021023  152005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 12:22:39.036456  152005 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 12:22:39.056923  152005 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 12:22:39.057001  152005 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:22:39.068804  152005 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 12:22:39.068892  152005 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:22:39.079491  152005 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:22:39.091040  152005 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:22:39.101810  152005 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 12:22:39.112971  152005 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:22:39.123661  152005 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:22:39.136119  152005 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:22:39.146744  152005 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 12:22:39.156992  152005 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 12:22:39.167623  152005 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:22:39.307935  152005 ssh_runner.go:195] Run: sudo systemctl restart crio
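The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, the unprivileged-port sysctl) and then restarts CRI-O. One way to spot-check the result afterwards from inside the VM (illustrative, not something the test performs):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	sudo crio config | grep -E 'pause_image|cgroup_manager'   # effective values after the restart
	sudo crictl info | head -n 20                              # runtime answers on /var/run/crio/crio.sock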
	I0819 12:22:39.697080  152005 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 12:22:39.697161  152005 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 12:22:39.708704  152005 start.go:563] Will wait 60s for crictl version
	I0819 12:22:39.708783  152005 ssh_runner.go:195] Run: which crictl
	I0819 12:22:39.724808  152005 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 12:22:39.912391  152005 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 12:22:39.912538  152005 ssh_runner.go:195] Run: crio --version
	I0819 12:22:40.192674  152005 ssh_runner.go:195] Run: crio --version
	I0819 12:22:40.415606  152005 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 12:22:40.416872  152005 main.go:141] libmachine: (pause-732494) Calling .GetIP
	I0819 12:22:40.420351  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:40.420702  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:40.420732  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:40.420980  152005 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 12:22:40.440755  152005 kubeadm.go:883] updating cluster {Name:pause-732494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-732494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 12:22:40.440940  152005 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:22:40.441009  152005 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:22:40.564470  152005 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 12:22:40.564505  152005 crio.go:433] Images already preloaded, skipping extraction
	I0819 12:22:40.564571  152005 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:22:40.670891  152005 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 12:22:40.670915  152005 cache_images.go:84] Images are preloaded, skipping loading
	I0819 12:22:40.670923  152005 kubeadm.go:934] updating node { 192.168.39.24 8443 v1.31.0 crio true true} ...
	I0819 12:22:40.671031  152005 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-732494 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:pause-732494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 12:22:40.671111  152005 ssh_runner.go:195] Run: crio config
	I0819 12:22:40.751848  152005 cni.go:84] Creating CNI manager for ""
	I0819 12:22:40.751875  152005 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 12:22:40.751889  152005 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 12:22:40.751921  152005 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.24 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-732494 NodeName:pause-732494 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 12:22:40.752093  152005 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-732494"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.24
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.24"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 12:22:40.752168  152005 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 12:22:40.764821  152005 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 12:22:40.764907  152005 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 12:22:40.778039  152005 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0819 12:22:40.801052  152005 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 12:22:40.826516  152005 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
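At this point the rendered kubeadm config is staged on the VM as /var/tmp/minikube/kubeadm.yaml.new. If the kubeadm binary in use ships the `config validate` subcommand (recent releases do, but check your version), the file can be sanity-checked offline; this is an optional illustration, not something the test run performs:

	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	# Compare against upstream defaults for the same component configs:
	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration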
	I0819 12:22:40.860431  152005 ssh_runner.go:195] Run: grep 192.168.39.24	control-plane.minikube.internal$ /etc/hosts
	I0819 12:22:40.867074  152005 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:22:41.153303  152005 ssh_runner.go:195] Run: sudo systemctl start kubelet
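With the unit file and the 10-kubeadm.conf drop-in in place and systemd reloaded, the effective kubelet invocation (the ExecStart rendered earlier in the log) can be inspected directly; a quick illustrative check from inside the VM:

	systemctl cat kubelet                            # unit file plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart --no-pager   # the flags kubelet was actually started with
	systemctl is-active kubelet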
	I0819 12:22:41.178057  152005 certs.go:68] Setting up /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/pause-732494 for IP: 192.168.39.24
	I0819 12:22:41.178082  152005 certs.go:194] generating shared ca certs ...
	I0819 12:22:41.178103  152005 certs.go:226] acquiring lock for ca certs: {Name:mkea0a5571fe2f86ea35780a8a0585cdcb5f186c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:22:41.178290  152005 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key
	I0819 12:22:41.178351  152005 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key
	I0819 12:22:41.178368  152005 certs.go:256] generating profile certs ...
	I0819 12:22:41.178484  152005 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/pause-732494/client.key
	I0819 12:22:41.178565  152005 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/pause-732494/apiserver.key.96bc570c
	I0819 12:22:41.178616  152005 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/pause-732494/proxy-client.key
	I0819 12:22:41.178769  152005 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem (1338 bytes)
	W0819 12:22:41.178814  152005 certs.go:480] ignoring /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632_empty.pem, impossibly tiny 0 bytes
	I0819 12:22:41.178828  152005 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 12:22:41.178862  152005 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem (1082 bytes)
	I0819 12:22:41.178898  152005 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem (1123 bytes)
	I0819 12:22:41.178931  152005 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem (1679 bytes)
	I0819 12:22:41.178987  152005 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 12:22:41.179867  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 12:22:41.215204  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 12:22:41.258684  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 12:22:41.305321  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 12:22:41.332940  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/pause-732494/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 12:22:41.365978  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/pause-732494/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 12:22:41.393037  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/pause-732494/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 12:22:41.419348  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/pause-732494/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 12:22:41.450462  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 12:22:41.479646  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem --> /usr/share/ca-certificates/106632.pem (1338 bytes)
	I0819 12:22:41.514370  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /usr/share/ca-certificates/1066322.pem (1708 bytes)
	I0819 12:22:41.546927  152005 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 12:22:41.572743  152005 ssh_runner.go:195] Run: openssl version
	I0819 12:22:41.578664  152005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 12:22:41.590058  152005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:22:41.596800  152005 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:22:41.596868  152005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:22:41.602769  152005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 12:22:41.618424  152005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106632.pem && ln -fs /usr/share/ca-certificates/106632.pem /etc/ssl/certs/106632.pem"
	I0819 12:22:41.631176  152005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106632.pem
	I0819 12:22:41.636071  152005 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 11:26 /usr/share/ca-certificates/106632.pem
	I0819 12:22:41.636154  152005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106632.pem
	I0819 12:22:41.641886  152005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106632.pem /etc/ssl/certs/51391683.0"
	I0819 12:22:41.652357  152005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1066322.pem && ln -fs /usr/share/ca-certificates/1066322.pem /etc/ssl/certs/1066322.pem"
	I0819 12:22:41.664754  152005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1066322.pem
	I0819 12:22:41.669272  152005 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 11:26 /usr/share/ca-certificates/1066322.pem
	I0819 12:22:41.669356  152005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1066322.pem
	I0819 12:22:41.675323  152005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1066322.pem /etc/ssl/certs/3ec20f2e.0"
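The three test/ln pairs above implement OpenSSL's hashed-directory convention: each CA copied into /usr/share/ca-certificates gets a <subject-hash>.0 symlink under /etc/ssl/certs so the TLS library can locate it. Spelled out for one of them (illustrative):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"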
	I0819 12:22:41.685164  152005 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:22:41.689760  152005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 12:22:41.695350  152005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 12:22:41.701889  152005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 12:22:41.707860  152005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 12:22:41.713486  152005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 12:22:41.719094  152005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
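Each `-checkend 86400` call above asks openssl whether the certificate remains valid for at least another 86400 seconds (24 hours); a non-zero exit marks the certificate as expiring soon. The same probe can be run by hand against the paths shown in the log, for example:

	for c in apiserver-kubelet-client apiserver-etcd-client front-proxy-client; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
	    && echo "${c}: valid for 24h+" || echo "${c}: expires within 24h"
	done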
	I0819 12:22:41.726907  152005 kubeadm.go:392] StartCluster: {Name:pause-732494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-732494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:22:41.727083  152005 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 12:22:41.727145  152005 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 12:22:41.813855  152005 cri.go:89] found id: "f709bf5da6fd4d408a7e54b1ee6a528e5f32e8b0ad2d49fec9ff5fc186598c5d"
	I0819 12:22:41.813881  152005 cri.go:89] found id: "d046b6b14ce676d0afa11cb1cc91ccb4234a25da2d94e0fe3d2eec6f0af3b51d"
	I0819 12:22:41.813888  152005 cri.go:89] found id: "e3d093640f912a4953d7d7640925c0c37b84a18e2d96e841b2fcb9417f3f1c60"
	I0819 12:22:41.813892  152005 cri.go:89] found id: "0125c48d70b7ffe290c4dd43414dc36ded6bf35818dc29ab7d8ac945fdec4a59"
	I0819 12:22:41.813897  152005 cri.go:89] found id: "641b6edcbd83a540e5e005320007d44ab96f19ba227e084c263b8094d648669a"
	I0819 12:22:41.813902  152005 cri.go:89] found id: "c60e87585c18b8c957d1589206c0b6314660d2d53a3e5f15d6d2ba55239ce612"
	I0819 12:22:41.813906  152005 cri.go:89] found id: "e5ae2af15481ac5157a34eeb2c75068066569366d3e36ae802b0422fcb487d5f"
	I0819 12:22:41.813912  152005 cri.go:89] found id: "71ea972f68efba523d2b778fd4605656e17625fd8ec7ce7a9612db8f732c3c20"
	I0819 12:22:41.813917  152005 cri.go:89] found id: "65c4c92bbc54f2f7bf61f448bcfdc2da3729dc17648d995c3ff9d60dfa47695e"
	I0819 12:22:41.813925  152005 cri.go:89] found id: "9962cad310005442266bfd1020886340d9bf21d8ef66a09a62236768f681bb7d"
	I0819 12:22:41.813930  152005 cri.go:89] found id: "6edfdd09f92c9948c43a5a94428c6cf6442587d0219b11057bbc6093ee3c00ff"
	I0819 12:22:41.813934  152005 cri.go:89] found id: "b48be31cf132a32ee25c0f7f53defa53fb04dd4b0288a1d3859f0d01a25b03ce"
	I0819 12:22:41.813939  152005 cri.go:89] found id: "2bb9efcb02cd6e18c239d0ebd154a76fef9d91391a48cc48f377111051f3f1e2"
	I0819 12:22:41.813946  152005 cri.go:89] found id: ""
	I0819 12:22:41.814000  152005 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-732494 -n pause-732494
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-732494 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-732494 logs -n 25: (1.363337777s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-787042 sudo crio            | cilium-787042             | jenkins | v1.33.1 | 19 Aug 24 12:18 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-787042                      | cilium-787042             | jenkins | v1.33.1 | 19 Aug 24 12:18 UTC | 19 Aug 24 12:18 UTC |
	| start   | -p force-systemd-flag-557690          | force-systemd-flag-557690 | jenkins | v1.33.1 | 19 Aug 24 12:18 UTC | 19 Aug 24 12:20 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p offline-crio-320395                | offline-crio-320395       | jenkins | v1.33.1 | 19 Aug 24 12:19 UTC | 19 Aug 24 12:19 UTC |
	| start   | -p cert-expiration-497658             | cert-expiration-497658    | jenkins | v1.33.1 | 19 Aug 24 12:19 UTC | 19 Aug 24 12:20 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-340370                | NoKubernetes-340370       | jenkins | v1.33.1 | 19 Aug 24 12:19 UTC | 19 Aug 24 12:20 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-357956             | running-upgrade-357956    | jenkins | v1.33.1 | 19 Aug 24 12:20 UTC | 19 Aug 24 12:21 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-557690 ssh cat     | force-systemd-flag-557690 | jenkins | v1.33.1 | 19 Aug 24 12:20 UTC | 19 Aug 24 12:20 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-557690          | force-systemd-flag-557690 | jenkins | v1.33.1 | 19 Aug 24 12:20 UTC | 19 Aug 24 12:20 UTC |
	| start   | -p cert-options-294561                | cert-options-294561       | jenkins | v1.33.1 | 19 Aug 24 12:20 UTC | 19 Aug 24 12:21 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-340370                | NoKubernetes-340370       | jenkins | v1.33.1 | 19 Aug 24 12:20 UTC | 19 Aug 24 12:20 UTC |
	| start   | -p NoKubernetes-340370                | NoKubernetes-340370       | jenkins | v1.33.1 | 19 Aug 24 12:20 UTC | 19 Aug 24 12:21 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-357956             | running-upgrade-357956    | jenkins | v1.33.1 | 19 Aug 24 12:21 UTC | 19 Aug 24 12:21 UTC |
	| start   | -p pause-732494 --memory=2048         | pause-732494              | jenkins | v1.33.1 | 19 Aug 24 12:21 UTC | 19 Aug 24 12:22 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-294561 ssh               | cert-options-294561       | jenkins | v1.33.1 | 19 Aug 24 12:21 UTC | 19 Aug 24 12:21 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-294561 -- sudo        | cert-options-294561       | jenkins | v1.33.1 | 19 Aug 24 12:21 UTC | 19 Aug 24 12:21 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-294561                | cert-options-294561       | jenkins | v1.33.1 | 19 Aug 24 12:21 UTC | 19 Aug 24 12:21 UTC |
	| start   | -p kubernetes-upgrade-814177          | kubernetes-upgrade-814177 | jenkins | v1.33.1 | 19 Aug 24 12:21 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-340370 sudo           | NoKubernetes-340370       | jenkins | v1.33.1 | 19 Aug 24 12:21 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-340370                | NoKubernetes-340370       | jenkins | v1.33.1 | 19 Aug 24 12:21 UTC | 19 Aug 24 12:21 UTC |
	| start   | -p NoKubernetes-340370                | NoKubernetes-340370       | jenkins | v1.33.1 | 19 Aug 24 12:21 UTC | 19 Aug 24 12:22 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-732494                       | pause-732494              | jenkins | v1.33.1 | 19 Aug 24 12:22 UTC | 19 Aug 24 12:23 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-340370 sudo           | NoKubernetes-340370       | jenkins | v1.33.1 | 19 Aug 24 12:22 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-340370                | NoKubernetes-340370       | jenkins | v1.33.1 | 19 Aug 24 12:22 UTC | 19 Aug 24 12:22 UTC |
	| start   | -p stopped-upgrade-111717             | minikube                  | jenkins | v1.26.0 | 19 Aug 24 12:22 UTC |                     |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 12:22:34
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 12:22:34.612118  152185 out.go:296] Setting OutFile to fd 1 ...
	I0819 12:22:34.612233  152185 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0819 12:22:34.612236  152185 out.go:309] Setting ErrFile to fd 2...
	I0819 12:22:34.612240  152185 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0819 12:22:34.612694  152185 root.go:329] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 12:22:34.612978  152185 out.go:303] Setting JSON to false
	I0819 12:22:34.613938  152185 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7501,"bootTime":1724062654,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 12:22:34.613999  152185 start.go:125] virtualization: kvm guest
	I0819 12:22:34.616364  152185 out.go:177] * [stopped-upgrade-111717] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0819 12:22:34.617694  152185 notify.go:193] Checking for updates...
	I0819 12:22:34.618864  152185 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 12:22:34.620034  152185 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:22:34.621571  152185 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 12:22:34.622904  152185 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 12:22:34.624372  152185 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 12:22:34.625821  152185 out.go:177]   - KUBECONFIG=/tmp/legacy_kubeconfig1993458851
	I0819 12:22:34.627525  152185 config.go:178] Loaded profile config "cert-expiration-497658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:22:34.627616  152185 config.go:178] Loaded profile config "kubernetes-upgrade-814177": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 12:22:34.627749  152185 config.go:178] Loaded profile config "pause-732494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:22:34.627805  152185 driver.go:360] Setting default libvirt URI to qemu:///system
	I0819 12:22:34.665283  152185 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 12:22:34.666327  152185 start.go:284] selected driver: kvm2
	I0819 12:22:34.666335  152185 start.go:805] validating driver "kvm2" against <nil>
	I0819 12:22:34.666352  152185 start.go:816] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 12:22:34.667093  152185 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:22:34.667285  152185 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19476-99410/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 12:22:34.683163  152185 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 12:22:34.683236  152185 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0819 12:22:34.683453  152185 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 12:22:34.683499  152185 cni.go:95] Creating CNI manager for ""
	I0819 12:22:34.683510  152185 cni.go:165] "kvm2" driver + crio runtime found, recommending bridge
	I0819 12:22:34.683516  152185 start_flags.go:305] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 12:22:34.683525  152185 start_flags.go:310] config:
	{Name:stopped-upgrade-111717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-111717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0819 12:22:34.683639  152185 iso.go:128] acquiring lock: {Name:mk0a8ef9bbe457d4d7a65de8e0862a7215eaca7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:22:34.685765  152185 out.go:177] * Starting control plane node stopped-upgrade-111717 in cluster stopped-upgrade-111717
	I0819 12:22:31.546005  152005 machine.go:93] provisionDockerMachine start ...
	I0819 12:22:31.546036  152005 main.go:141] libmachine: (pause-732494) Calling .DriverName
	I0819 12:22:31.546450  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:31.549220  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.549640  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:31.549670  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.549836  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:31.550023  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:31.550181  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:31.550335  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:31.550486  152005 main.go:141] libmachine: Using SSH client type: native
	I0819 12:22:31.550718  152005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0819 12:22:31.550730  152005 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 12:22:31.651775  152005 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-732494
	
	I0819 12:22:31.651812  152005 main.go:141] libmachine: (pause-732494) Calling .GetMachineName
	I0819 12:22:31.652054  152005 buildroot.go:166] provisioning hostname "pause-732494"
	I0819 12:22:31.652087  152005 main.go:141] libmachine: (pause-732494) Calling .GetMachineName
	I0819 12:22:31.652258  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:31.655130  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.655458  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:31.655495  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.655664  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:31.655870  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:31.656026  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:31.656159  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:31.656335  152005 main.go:141] libmachine: Using SSH client type: native
	I0819 12:22:31.656578  152005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0819 12:22:31.656597  152005 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-732494 && echo "pause-732494" | sudo tee /etc/hostname
	I0819 12:22:31.772723  152005 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-732494
	
	I0819 12:22:31.772758  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:31.775428  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.775831  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:31.775874  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.776095  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:31.776303  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:31.776471  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:31.776596  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:31.776797  152005 main.go:141] libmachine: Using SSH client type: native
	I0819 12:22:31.777031  152005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0819 12:22:31.777058  152005 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-732494' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-732494/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-732494' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 12:22:31.885199  152005 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:22:31.885244  152005 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19476-99410/.minikube CaCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19476-99410/.minikube}
	I0819 12:22:31.885294  152005 buildroot.go:174] setting up certificates
	I0819 12:22:31.885311  152005 provision.go:84] configureAuth start
	I0819 12:22:31.885327  152005 main.go:141] libmachine: (pause-732494) Calling .GetMachineName
	I0819 12:22:31.885632  152005 main.go:141] libmachine: (pause-732494) Calling .GetIP
	I0819 12:22:31.888635  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.889065  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:31.889095  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.889187  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:31.892060  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.892525  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:31.892555  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.892706  152005 provision.go:143] copyHostCerts
	I0819 12:22:31.892778  152005 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem, removing ...
	I0819 12:22:31.892796  152005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 12:22:31.892870  152005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem (1082 bytes)
	I0819 12:22:31.893002  152005 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem, removing ...
	I0819 12:22:31.893013  152005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 12:22:31.893042  152005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem (1123 bytes)
	I0819 12:22:31.893147  152005 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem, removing ...
	I0819 12:22:31.893163  152005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 12:22:31.893191  152005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem (1679 bytes)
	I0819 12:22:31.893277  152005 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem org=jenkins.pause-732494 san=[127.0.0.1 192.168.39.24 localhost minikube pause-732494]
	I0819 12:22:32.197776  152005 provision.go:177] copyRemoteCerts
	I0819 12:22:32.197836  152005 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 12:22:32.197862  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:32.200913  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:32.201260  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:32.201305  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:32.201443  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:32.201726  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:32.202016  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:32.202206  152005 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/pause-732494/id_rsa Username:docker}
	I0819 12:22:32.281680  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 12:22:32.308734  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 12:22:32.337096  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 12:22:32.369669  152005 provision.go:87] duration metric: took 484.343284ms to configureAuth
	I0819 12:22:32.369704  152005 buildroot.go:189] setting minikube options for container-runtime
	I0819 12:22:32.369983  152005 config.go:182] Loaded profile config "pause-732494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:22:32.370079  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:32.372952  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:32.373295  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:32.373323  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:32.373530  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:32.373766  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:32.373971  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:32.374187  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:32.374386  152005 main.go:141] libmachine: Using SSH client type: native
	I0819 12:22:32.374641  152005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0819 12:22:32.374667  152005 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 12:22:34.687233  152185 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0819 12:22:34.687269  152185 preload.go:148] Found local preload: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0819 12:22:34.687276  152185 cache.go:57] Caching tarball of preloaded images
	I0819 12:22:34.687994  152185 preload.go:174] Found /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 12:22:34.688017  152185 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.1 on crio
	I0819 12:22:34.688845  152185 profile.go:148] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/stopped-upgrade-111717/config.json ...
	I0819 12:22:34.688871  152185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/stopped-upgrade-111717/config.json: {Name:mka2933c60a70e49712a675dc62663660862c4bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:22:34.689041  152185 cache.go:208] Successfully downloaded all kic artifacts
	I0819 12:22:34.689076  152185 start.go:352] acquiring machines lock for stopped-upgrade-111717: {Name:mk79269fbbd6912302c8df8ddb039d4e2f1a0790 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 12:22:38.100908  152185 start.go:356] acquired machines lock for "stopped-upgrade-111717" in 3.411810582s
	I0819 12:22:38.100971  152185 start.go:91] Provisioning new machine with config: &{Name:stopped-upgrade-111717 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-111717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 12:22:38.101104  152185 start.go:131] createHost starting for "" (driver="kvm2")
	I0819 12:22:38.103838  152185 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 12:22:38.104026  152185 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:22:38.104077  152185 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0819 12:22:38.120996  152185 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:33791
	I0819 12:22:38.121466  152185 main.go:134] libmachine: () Calling .GetVersion
	I0819 12:22:38.122212  152185 main.go:134] libmachine: Using API Version  1
	I0819 12:22:38.122232  152185 main.go:134] libmachine: () Calling .SetConfigRaw
	I0819 12:22:38.122620  152185 main.go:134] libmachine: () Calling .GetMachineName
	I0819 12:22:38.122855  152185 main.go:134] libmachine: (stopped-upgrade-111717) Calling .GetMachineName
	I0819 12:22:38.122998  152185 main.go:134] libmachine: (stopped-upgrade-111717) Calling .DriverName
	I0819 12:22:38.123145  152185 start.go:165] libmachine.API.Create for "stopped-upgrade-111717" (driver="kvm2")
	I0819 12:22:38.123172  152185 client.go:168] LocalClient.Create starting
	I0819 12:22:38.123208  152185 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem
	I0819 12:22:38.123243  152185 main.go:134] libmachine: Decoding PEM data...
	I0819 12:22:38.123260  152185 main.go:134] libmachine: Parsing certificate...
	I0819 12:22:38.123324  152185 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem
	I0819 12:22:38.123348  152185 main.go:134] libmachine: Decoding PEM data...
	I0819 12:22:38.123361  152185 main.go:134] libmachine: Parsing certificate...
	I0819 12:22:38.123403  152185 main.go:134] libmachine: Running pre-create checks...
	I0819 12:22:38.123414  152185 main.go:134] libmachine: (stopped-upgrade-111717) Calling .PreCreateCheck
	I0819 12:22:38.123796  152185 main.go:134] libmachine: (stopped-upgrade-111717) Calling .GetConfigRaw
	I0819 12:22:38.124229  152185 main.go:134] libmachine: Creating machine...
	I0819 12:22:38.124236  152185 main.go:134] libmachine: (stopped-upgrade-111717) Calling .Create
	I0819 12:22:38.124439  152185 main.go:134] libmachine: (stopped-upgrade-111717) Creating KVM machine...
	I0819 12:22:38.125744  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | found existing default KVM network
	I0819 12:22:38.127139  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:38.126960  152225 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:6c:b4:c0} reservation:<nil>}
	I0819 12:22:38.128230  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:38.128115  152225 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f0:ea:96} reservation:<nil>}
	I0819 12:22:38.129547  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:38.129462  152225 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00028b6a0}
	I0819 12:22:38.129568  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | created network xml: 
	I0819 12:22:38.129578  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | <network>
	I0819 12:22:38.129585  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG |   <name>mk-stopped-upgrade-111717</name>
	I0819 12:22:38.129602  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG |   <dns enable='no'/>
	I0819 12:22:38.129612  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG |   
	I0819 12:22:38.129622  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0819 12:22:38.129631  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG |     <dhcp>
	I0819 12:22:38.129641  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0819 12:22:38.129649  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG |     </dhcp>
	I0819 12:22:38.129656  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG |   </ip>
	I0819 12:22:38.129663  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG |   
	I0819 12:22:38.129668  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | </network>
	I0819 12:22:38.129677  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | 
	I0819 12:22:38.135408  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | trying to create private KVM network mk-stopped-upgrade-111717 192.168.61.0/24...
	I0819 12:22:38.212364  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | private KVM network mk-stopped-upgrade-111717 192.168.61.0/24 created
	I0819 12:22:38.212421  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:38.212323  152225 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 12:22:38.212444  152185 main.go:134] libmachine: (stopped-upgrade-111717) Setting up store path in /home/jenkins/minikube-integration/19476-99410/.minikube/machines/stopped-upgrade-111717 ...
	I0819 12:22:38.212468  152185 main.go:134] libmachine: (stopped-upgrade-111717) Building disk image from file:///home/jenkins/minikube-integration/19476-99410/.minikube/cache/iso/amd64/minikube-v1.26.0-amd64.iso
	I0819 12:22:38.212487  152185 main.go:134] libmachine: (stopped-upgrade-111717) Downloading /home/jenkins/minikube-integration/19476-99410/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19476-99410/.minikube/cache/iso/amd64/minikube-v1.26.0-amd64.iso...
	I0819 12:22:38.420755  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:38.420591  152225 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/stopped-upgrade-111717/id_rsa...
	I0819 12:22:38.514079  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:38.513961  152225 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/stopped-upgrade-111717/stopped-upgrade-111717.rawdisk...
	I0819 12:22:38.514098  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | Writing magic tar header
	I0819 12:22:38.514120  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | Writing SSH key tar header
	I0819 12:22:38.514128  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:38.514077  152225 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19476-99410/.minikube/machines/stopped-upgrade-111717 ...
	I0819 12:22:38.514222  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/stopped-upgrade-111717
	I0819 12:22:38.514228  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube/machines
	I0819 12:22:38.514237  152185 main.go:134] libmachine: (stopped-upgrade-111717) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube/machines/stopped-upgrade-111717 (perms=drwx------)
	I0819 12:22:38.514249  152185 main.go:134] libmachine: (stopped-upgrade-111717) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube/machines (perms=drwxr-xr-x)
	I0819 12:22:38.514258  152185 main.go:134] libmachine: (stopped-upgrade-111717) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube (perms=drwxr-xr-x)
	I0819 12:22:38.514264  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 12:22:38.514274  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410
	I0819 12:22:38.514280  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 12:22:38.514288  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | Checking permissions on dir: /home/jenkins
	I0819 12:22:38.514293  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | Checking permissions on dir: /home
	I0819 12:22:38.514335  152185 main.go:134] libmachine: (stopped-upgrade-111717) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410 (perms=drwxrwxr-x)
	I0819 12:22:38.514359  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | Skipping /home - not owner
	I0819 12:22:38.514372  152185 main.go:134] libmachine: (stopped-upgrade-111717) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 12:22:38.514389  152185 main.go:134] libmachine: (stopped-upgrade-111717) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 12:22:38.514398  152185 main.go:134] libmachine: (stopped-upgrade-111717) Creating domain...
	I0819 12:22:38.515495  152185 main.go:134] libmachine: (stopped-upgrade-111717) define libvirt domain using xml: 
	I0819 12:22:38.515518  152185 main.go:134] libmachine: (stopped-upgrade-111717) <domain type='kvm'>
	I0819 12:22:38.515531  152185 main.go:134] libmachine: (stopped-upgrade-111717)   <name>stopped-upgrade-111717</name>
	I0819 12:22:38.515540  152185 main.go:134] libmachine: (stopped-upgrade-111717)   <memory unit='MiB'>2200</memory>
	I0819 12:22:38.515548  152185 main.go:134] libmachine: (stopped-upgrade-111717)   <vcpu>2</vcpu>
	I0819 12:22:38.515555  152185 main.go:134] libmachine: (stopped-upgrade-111717)   <features>
	I0819 12:22:38.515564  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <acpi/>
	I0819 12:22:38.515571  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <apic/>
	I0819 12:22:38.515579  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <pae/>
	I0819 12:22:38.515585  152185 main.go:134] libmachine: (stopped-upgrade-111717)     
	I0819 12:22:38.515593  152185 main.go:134] libmachine: (stopped-upgrade-111717)   </features>
	I0819 12:22:38.515601  152185 main.go:134] libmachine: (stopped-upgrade-111717)   <cpu mode='host-passthrough'>
	I0819 12:22:38.515610  152185 main.go:134] libmachine: (stopped-upgrade-111717)   
	I0819 12:22:38.515618  152185 main.go:134] libmachine: (stopped-upgrade-111717)   </cpu>
	I0819 12:22:38.515626  152185 main.go:134] libmachine: (stopped-upgrade-111717)   <os>
	I0819 12:22:38.515640  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <type>hvm</type>
	I0819 12:22:38.515649  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <boot dev='cdrom'/>
	I0819 12:22:38.515657  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <boot dev='hd'/>
	I0819 12:22:38.515667  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <bootmenu enable='no'/>
	I0819 12:22:38.515674  152185 main.go:134] libmachine: (stopped-upgrade-111717)   </os>
	I0819 12:22:38.515681  152185 main.go:134] libmachine: (stopped-upgrade-111717)   <devices>
	I0819 12:22:38.515690  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <disk type='file' device='cdrom'>
	I0819 12:22:38.515702  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <source file='/home/jenkins/minikube-integration/19476-99410/.minikube/machines/stopped-upgrade-111717/boot2docker.iso'/>
	I0819 12:22:38.515710  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <target dev='hdc' bus='scsi'/>
	I0819 12:22:38.515760  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <readonly/>
	I0819 12:22:38.515776  152185 main.go:134] libmachine: (stopped-upgrade-111717)     </disk>
	I0819 12:22:38.515789  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <disk type='file' device='disk'>
	I0819 12:22:38.515795  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 12:22:38.515804  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <source file='/home/jenkins/minikube-integration/19476-99410/.minikube/machines/stopped-upgrade-111717/stopped-upgrade-111717.rawdisk'/>
	I0819 12:22:38.515810  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <target dev='hda' bus='virtio'/>
	I0819 12:22:38.515815  152185 main.go:134] libmachine: (stopped-upgrade-111717)     </disk>
	I0819 12:22:38.515820  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <interface type='network'>
	I0819 12:22:38.515826  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <source network='mk-stopped-upgrade-111717'/>
	I0819 12:22:38.515831  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <model type='virtio'/>
	I0819 12:22:38.515836  152185 main.go:134] libmachine: (stopped-upgrade-111717)     </interface>
	I0819 12:22:38.515841  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <interface type='network'>
	I0819 12:22:38.515847  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <source network='default'/>
	I0819 12:22:38.515855  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <model type='virtio'/>
	I0819 12:22:38.515865  152185 main.go:134] libmachine: (stopped-upgrade-111717)     </interface>
	I0819 12:22:38.515873  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <serial type='pty'>
	I0819 12:22:38.515880  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <target port='0'/>
	I0819 12:22:38.515890  152185 main.go:134] libmachine: (stopped-upgrade-111717)     </serial>
	I0819 12:22:38.515895  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <console type='pty'>
	I0819 12:22:38.515900  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <target type='serial' port='0'/>
	I0819 12:22:38.515905  152185 main.go:134] libmachine: (stopped-upgrade-111717)     </console>
	I0819 12:22:38.515910  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <rng model='virtio'>
	I0819 12:22:38.515916  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <backend model='random'>/dev/random</backend>
	I0819 12:22:38.515920  152185 main.go:134] libmachine: (stopped-upgrade-111717)     </rng>
	I0819 12:22:38.515925  152185 main.go:134] libmachine: (stopped-upgrade-111717)     
	I0819 12:22:38.515929  152185 main.go:134] libmachine: (stopped-upgrade-111717)     
	I0819 12:22:38.515934  152185 main.go:134] libmachine: (stopped-upgrade-111717)   </devices>
	I0819 12:22:38.515938  152185 main.go:134] libmachine: (stopped-upgrade-111717) </domain>
	I0819 12:22:38.515946  152185 main.go:134] libmachine: (stopped-upgrade-111717) 
	I0819 12:22:38.520589  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | domain stopped-upgrade-111717 has defined MAC address 52:54:00:2e:3e:7d in network default
	I0819 12:22:38.521194  152185 main.go:134] libmachine: (stopped-upgrade-111717) Ensuring networks are active...
	I0819 12:22:38.521256  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | domain stopped-upgrade-111717 has defined MAC address 52:54:00:d6:a1:f3 in network mk-stopped-upgrade-111717
	I0819 12:22:38.521965  152185 main.go:134] libmachine: (stopped-upgrade-111717) Ensuring network default is active
	I0819 12:22:38.522273  152185 main.go:134] libmachine: (stopped-upgrade-111717) Ensuring network mk-stopped-upgrade-111717 is active
	I0819 12:22:38.522726  152185 main.go:134] libmachine: (stopped-upgrade-111717) Getting domain xml...
	I0819 12:22:38.523400  152185 main.go:134] libmachine: (stopped-upgrade-111717) Creating domain...
	I0819 12:22:37.874249  152005 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 12:22:37.874286  152005 machine.go:96] duration metric: took 6.328260286s to provisionDockerMachine
	I0819 12:22:37.874305  152005 start.go:293] postStartSetup for "pause-732494" (driver="kvm2")
	I0819 12:22:37.874327  152005 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 12:22:37.874357  152005 main.go:141] libmachine: (pause-732494) Calling .DriverName
	I0819 12:22:37.874822  152005 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 12:22:37.874853  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:37.878095  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:37.878564  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:37.878590  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:37.878780  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:37.878991  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:37.879159  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:37.879310  152005 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/pause-732494/id_rsa Username:docker}
	I0819 12:22:37.959139  152005 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 12:22:37.963530  152005 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 12:22:37.963575  152005 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/addons for local assets ...
	I0819 12:22:37.963662  152005 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/files for local assets ...
	I0819 12:22:37.963784  152005 filesync.go:149] local asset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> 1066322.pem in /etc/ssl/certs
	I0819 12:22:37.963908  152005 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 12:22:37.973387  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 12:22:37.997255  152005 start.go:296] duration metric: took 122.932444ms for postStartSetup
	I0819 12:22:37.997302  152005 fix.go:56] duration metric: took 6.47677026s for fixHost
	I0819 12:22:37.997324  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:38.000043  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:38.000434  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:38.000464  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:38.000645  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:38.000845  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:38.001023  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:38.001221  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:38.001377  152005 main.go:141] libmachine: Using SSH client type: native
	I0819 12:22:38.001610  152005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0819 12:22:38.001627  152005 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 12:22:38.100660  152005 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724070158.089732951
	
	I0819 12:22:38.100685  152005 fix.go:216] guest clock: 1724070158.089732951
	I0819 12:22:38.100694  152005 fix.go:229] Guest: 2024-08-19 12:22:38.089732951 +0000 UTC Remote: 2024-08-19 12:22:37.997306217 +0000 UTC m=+16.826858779 (delta=92.426734ms)
	I0819 12:22:38.100740  152005 fix.go:200] guest clock delta is within tolerance: 92.426734ms
	I0819 12:22:38.100747  152005 start.go:83] releasing machines lock for "pause-732494", held for 6.580248803s
	I0819 12:22:38.100776  152005 main.go:141] libmachine: (pause-732494) Calling .DriverName
	I0819 12:22:38.101106  152005 main.go:141] libmachine: (pause-732494) Calling .GetIP
	I0819 12:22:38.103703  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:38.104187  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:38.104221  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:38.104404  152005 main.go:141] libmachine: (pause-732494) Calling .DriverName
	I0819 12:22:38.105033  152005 main.go:141] libmachine: (pause-732494) Calling .DriverName
	I0819 12:22:38.105240  152005 main.go:141] libmachine: (pause-732494) Calling .DriverName
	I0819 12:22:38.105344  152005 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 12:22:38.105402  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:38.105438  152005 ssh_runner.go:195] Run: cat /version.json
	I0819 12:22:38.105461  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:38.108213  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:38.108474  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:38.108533  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:38.108562  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:38.108718  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:38.108855  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:38.108877  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:38.108922  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:38.109024  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:38.109093  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:38.109194  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:38.109283  152005 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/pause-732494/id_rsa Username:docker}
	I0819 12:22:38.109321  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:38.109440  152005 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/pause-732494/id_rsa Username:docker}
	I0819 12:22:38.185742  152005 ssh_runner.go:195] Run: systemctl --version
	I0819 12:22:38.206055  152005 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 12:22:38.361478  152005 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 12:22:38.367061  152005 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 12:22:38.367127  152005 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 12:22:38.377122  152005 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 12:22:38.377164  152005 start.go:495] detecting cgroup driver to use...
	I0819 12:22:38.377244  152005 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 12:22:38.393342  152005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 12:22:38.413251  152005 docker.go:217] disabling cri-docker service (if available) ...
	I0819 12:22:38.413325  152005 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 12:22:38.429264  152005 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 12:22:38.447906  152005 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 12:22:38.584508  152005 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 12:22:38.718505  152005 docker.go:233] disabling docker service ...
	I0819 12:22:38.718594  152005 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 12:22:38.735006  152005 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 12:22:38.748877  152005 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 12:22:38.889235  152005 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 12:22:39.021023  152005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 12:22:39.036456  152005 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 12:22:39.056923  152005 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 12:22:39.057001  152005 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:22:39.068804  152005 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 12:22:39.068892  152005 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:22:39.079491  152005 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:22:39.091040  152005 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:22:39.101810  152005 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 12:22:39.112971  152005 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:22:39.123661  152005 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:22:39.136119  152005 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:22:39.146744  152005 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 12:22:39.156992  152005 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 12:22:39.167623  152005 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:22:39.307935  152005 ssh_runner.go:195] Run: sudo systemctl restart crio
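	For illustration only: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. A rough Python sketch of the same substitutions (the rewrite_crio_conf helper is hypothetical, not minikube code) could look like this:

    import re

    def rewrite_crio_conf(text: str) -> str:
        """Mirror the sed edits above: pause image, cgroupfs, conmon cgroup, sysctls."""
        text = re.sub(r'^.*pause_image = .*$',
                      'pause_image = "registry.k8s.io/pause:3.10"', text, flags=re.M)
        text = re.sub(r'^.*cgroup_manager = .*$',
                      'cgroup_manager = "cgroupfs"', text, flags=re.M)
        # Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
        text = re.sub(r'^.*conmon_cgroup = .*\n', '', text, flags=re.M)
        text = re.sub(r'^(.*cgroup_manager = .*)$', r'\1\nconmon_cgroup = "pod"', text, flags=re.M)
        # Remove stale unprivileged-port entries, make sure a default_sysctls list
        # exists, then allow unprivileged low ports inside it.
        text = re.sub(r'^ *"net\.ipv4\.ip_unprivileged_port_start=.*\n', '', text, flags=re.M)
        if not re.search(r'^ *default_sysctls', text, flags=re.M):
            text = re.sub(r'^(conmon_cgroup = .*)$', r'\1\ndefault_sysctls = [\n]', text, flags=re.M)
        text = re.sub(r'^(default_sysctls *= *\[)',
                      r'\1\n  "net.ipv4.ip_unprivileged_port_start=0",', text, flags=re.M)
        return text

	The end state of the drop-in that CRI-O is restarted against is therefore: the registry.k8s.io/pause:3.10 pause image, cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 in default_sysctls.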
	I0819 12:22:39.697080  152005 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 12:22:39.697161  152005 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 12:22:39.708704  152005 start.go:563] Will wait 60s for crictl version
	I0819 12:22:39.708783  152005 ssh_runner.go:195] Run: which crictl
	I0819 12:22:39.724808  152005 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 12:22:39.912391  152005 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 12:22:39.912538  152005 ssh_runner.go:195] Run: crio --version
	I0819 12:22:40.192674  152005 ssh_runner.go:195] Run: crio --version
	I0819 12:22:40.415606  152005 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 12:22:40.416872  152005 main.go:141] libmachine: (pause-732494) Calling .GetIP
	I0819 12:22:40.420351  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:40.420702  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:40.420732  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:40.420980  152005 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 12:22:40.440755  152005 kubeadm.go:883] updating cluster {Name:pause-732494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0
ClusterName:pause-732494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 12:22:40.440940  152005 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:22:40.441009  152005 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:22:40.564470  152005 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 12:22:40.564505  152005 crio.go:433] Images already preloaded, skipping extraction
	I0819 12:22:40.564571  152005 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:22:40.670891  152005 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 12:22:40.670915  152005 cache_images.go:84] Images are preloaded, skipping loading
	I0819 12:22:40.670923  152005 kubeadm.go:934] updating node { 192.168.39.24 8443 v1.31.0 crio true true} ...
	I0819 12:22:40.671031  152005 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-732494 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:pause-732494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 12:22:40.671111  152005 ssh_runner.go:195] Run: crio config
	I0819 12:22:40.751848  152005 cni.go:84] Creating CNI manager for ""
	I0819 12:22:40.751875  152005 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 12:22:40.751889  152005 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 12:22:40.751921  152005 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.24 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-732494 NodeName:pause-732494 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 12:22:40.752093  152005 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-732494"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.24
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.24"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
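	The multi-document config printed above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. For illustration only (assumes PyYAML is installed; the check_kubeadm_config helper is hypothetical, not part of minikube), a quick sanity check of that file could look like:

    import yaml  # PyYAML

    def check_kubeadm_config(path="/var/tmp/minikube/kubeadm.yaml.new"):
        with open(path) as f:
            docs = {d["kind"]: d for d in yaml.safe_load_all(f) if d}
        cluster = docs["ClusterConfiguration"]
        # Values expected from the config printed above.
        assert cluster["kubernetesVersion"] == "v1.31.0"
        assert cluster["networking"]["podSubnet"] == docs["KubeProxyConfiguration"]["clusterCIDR"]
        assert docs["KubeletConfiguration"]["cgroupDriver"] == "cgroupfs"
        assert docs["InitConfiguration"]["nodeRegistration"]["criSocket"] == "unix:///var/run/crio/crio.sock"
        return docs

	The podSubnet/clusterCIDR comparison matters because kube-proxy and the bridge CNI both need to agree with the 10.244.0.0/16 pod CIDR chosen above.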
	
	I0819 12:22:40.752168  152005 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 12:22:40.764821  152005 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 12:22:40.764907  152005 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 12:22:40.778039  152005 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0819 12:22:40.801052  152005 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 12:22:40.826516  152005 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0819 12:22:40.860431  152005 ssh_runner.go:195] Run: grep 192.168.39.24	control-plane.minikube.internal$ /etc/hosts
	I0819 12:22:40.867074  152005 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:22:41.153303  152005 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:22:41.178057  152005 certs.go:68] Setting up /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/pause-732494 for IP: 192.168.39.24
	I0819 12:22:41.178082  152005 certs.go:194] generating shared ca certs ...
	I0819 12:22:41.178103  152005 certs.go:226] acquiring lock for ca certs: {Name:mkea0a5571fe2f86ea35780a8a0585cdcb5f186c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:22:41.178290  152005 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key
	I0819 12:22:41.178351  152005 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key
	I0819 12:22:41.178368  152005 certs.go:256] generating profile certs ...
	I0819 12:22:41.178484  152005 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/pause-732494/client.key
	I0819 12:22:41.178565  152005 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/pause-732494/apiserver.key.96bc570c
	I0819 12:22:41.178616  152005 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/pause-732494/proxy-client.key
	I0819 12:22:41.178769  152005 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem (1338 bytes)
	W0819 12:22:41.178814  152005 certs.go:480] ignoring /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632_empty.pem, impossibly tiny 0 bytes
	I0819 12:22:41.178828  152005 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 12:22:41.178862  152005 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem (1082 bytes)
	I0819 12:22:41.178898  152005 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem (1123 bytes)
	I0819 12:22:41.178931  152005 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem (1679 bytes)
	I0819 12:22:41.178987  152005 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 12:22:41.179867  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 12:22:41.215204  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 12:22:39.833804  152185 main.go:134] libmachine: (stopped-upgrade-111717) Waiting to get IP...
	I0819 12:22:39.834886  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | domain stopped-upgrade-111717 has defined MAC address 52:54:00:d6:a1:f3 in network mk-stopped-upgrade-111717
	I0819 12:22:39.835485  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | unable to find current IP address of domain stopped-upgrade-111717 in network mk-stopped-upgrade-111717
	I0819 12:22:39.835542  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:39.835460  152225 retry.go:31] will retry after 237.082125ms: waiting for machine to come up
	I0819 12:22:40.074371  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | domain stopped-upgrade-111717 has defined MAC address 52:54:00:d6:a1:f3 in network mk-stopped-upgrade-111717
	I0819 12:22:40.075108  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | unable to find current IP address of domain stopped-upgrade-111717 in network mk-stopped-upgrade-111717
	I0819 12:22:40.075134  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:40.075064  152225 retry.go:31] will retry after 295.061023ms: waiting for machine to come up
	I0819 12:22:40.371783  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | domain stopped-upgrade-111717 has defined MAC address 52:54:00:d6:a1:f3 in network mk-stopped-upgrade-111717
	I0819 12:22:40.372319  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | unable to find current IP address of domain stopped-upgrade-111717 in network mk-stopped-upgrade-111717
	I0819 12:22:40.372341  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:40.372278  152225 retry.go:31] will retry after 361.181319ms: waiting for machine to come up
	I0819 12:22:40.734926  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | domain stopped-upgrade-111717 has defined MAC address 52:54:00:d6:a1:f3 in network mk-stopped-upgrade-111717
	I0819 12:22:40.735439  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | unable to find current IP address of domain stopped-upgrade-111717 in network mk-stopped-upgrade-111717
	I0819 12:22:40.735462  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:40.735392  152225 retry.go:31] will retry after 377.649372ms: waiting for machine to come up
	I0819 12:22:41.115222  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | domain stopped-upgrade-111717 has defined MAC address 52:54:00:d6:a1:f3 in network mk-stopped-upgrade-111717
	I0819 12:22:41.115830  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | unable to find current IP address of domain stopped-upgrade-111717 in network mk-stopped-upgrade-111717
	I0819 12:22:41.115851  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:41.115773  152225 retry.go:31] will retry after 695.776357ms: waiting for machine to come up
	I0819 12:22:41.812870  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | domain stopped-upgrade-111717 has defined MAC address 52:54:00:d6:a1:f3 in network mk-stopped-upgrade-111717
	I0819 12:22:41.813366  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | unable to find current IP address of domain stopped-upgrade-111717 in network mk-stopped-upgrade-111717
	I0819 12:22:41.813386  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:41.813315  152225 retry.go:31] will retry after 598.994886ms: waiting for machine to come up
	I0819 12:22:42.414129  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | domain stopped-upgrade-111717 has defined MAC address 52:54:00:d6:a1:f3 in network mk-stopped-upgrade-111717
	I0819 12:22:42.414752  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | unable to find current IP address of domain stopped-upgrade-111717 in network mk-stopped-upgrade-111717
	I0819 12:22:42.414777  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:42.414692  152225 retry.go:31] will retry after 988.941212ms: waiting for machine to come up
	I0819 12:22:43.405260  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | domain stopped-upgrade-111717 has defined MAC address 52:54:00:d6:a1:f3 in network mk-stopped-upgrade-111717
	I0819 12:22:43.405805  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | unable to find current IP address of domain stopped-upgrade-111717 in network mk-stopped-upgrade-111717
	I0819 12:22:43.405828  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:43.405741  152225 retry.go:31] will retry after 1.097996222s: waiting for machine to come up
	I0819 12:22:44.505029  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | domain stopped-upgrade-111717 has defined MAC address 52:54:00:d6:a1:f3 in network mk-stopped-upgrade-111717
	I0819 12:22:44.505654  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | unable to find current IP address of domain stopped-upgrade-111717 in network mk-stopped-upgrade-111717
	I0819 12:22:44.505672  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:44.505602  152225 retry.go:31] will retry after 1.420200785s: waiting for machine to come up
	I0819 12:22:41.258684  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 12:22:41.305321  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 12:22:41.332940  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/pause-732494/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 12:22:41.365978  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/pause-732494/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 12:22:41.393037  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/pause-732494/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 12:22:41.419348  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/pause-732494/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 12:22:41.450462  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 12:22:41.479646  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem --> /usr/share/ca-certificates/106632.pem (1338 bytes)
	I0819 12:22:41.514370  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /usr/share/ca-certificates/1066322.pem (1708 bytes)
	I0819 12:22:41.546927  152005 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 12:22:41.572743  152005 ssh_runner.go:195] Run: openssl version
	I0819 12:22:41.578664  152005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 12:22:41.590058  152005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:22:41.596800  152005 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:22:41.596868  152005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:22:41.602769  152005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 12:22:41.618424  152005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106632.pem && ln -fs /usr/share/ca-certificates/106632.pem /etc/ssl/certs/106632.pem"
	I0819 12:22:41.631176  152005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106632.pem
	I0819 12:22:41.636071  152005 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 11:26 /usr/share/ca-certificates/106632.pem
	I0819 12:22:41.636154  152005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106632.pem
	I0819 12:22:41.641886  152005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106632.pem /etc/ssl/certs/51391683.0"
	I0819 12:22:41.652357  152005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1066322.pem && ln -fs /usr/share/ca-certificates/1066322.pem /etc/ssl/certs/1066322.pem"
	I0819 12:22:41.664754  152005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1066322.pem
	I0819 12:22:41.669272  152005 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 11:26 /usr/share/ca-certificates/1066322.pem
	I0819 12:22:41.669356  152005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1066322.pem
	I0819 12:22:41.675323  152005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1066322.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 12:22:41.685164  152005 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:22:41.689760  152005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 12:22:41.695350  152005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 12:22:41.701889  152005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 12:22:41.707860  152005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 12:22:41.713486  152005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 12:22:41.719094  152005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
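	The six openssl runs above are 24-hour expiry checks (-checkend 86400) on the kubeadm-managed certificates. As a rough, hypothetical Python analogue (assumes the third-party cryptography package; not part of the test tooling):

    from datetime import datetime, timedelta
    from cryptography import x509

    def expires_within(pem_path, seconds=86400):
        """True if the certificate expires inside the window, i.e. the case
        where "openssl x509 -checkend" exits with a non-zero status."""
        with open(pem_path, "rb") as f:
            cert = x509.load_pem_x509_certificate(f.read())
        return cert.not_valid_after <= datetime.utcnow() + timedelta(seconds=seconds)

	No expiry problem is reported here before StartCluster begins on the next line.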
	I0819 12:22:41.726907  152005 kubeadm.go:392] StartCluster: {Name:pause-732494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:pause-732494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:22:41.727083  152005 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 12:22:41.727145  152005 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 12:22:41.813855  152005 cri.go:89] found id: "f709bf5da6fd4d408a7e54b1ee6a528e5f32e8b0ad2d49fec9ff5fc186598c5d"
	I0819 12:22:41.813881  152005 cri.go:89] found id: "d046b6b14ce676d0afa11cb1cc91ccb4234a25da2d94e0fe3d2eec6f0af3b51d"
	I0819 12:22:41.813888  152005 cri.go:89] found id: "e3d093640f912a4953d7d7640925c0c37b84a18e2d96e841b2fcb9417f3f1c60"
	I0819 12:22:41.813892  152005 cri.go:89] found id: "0125c48d70b7ffe290c4dd43414dc36ded6bf35818dc29ab7d8ac945fdec4a59"
	I0819 12:22:41.813897  152005 cri.go:89] found id: "641b6edcbd83a540e5e005320007d44ab96f19ba227e084c263b8094d648669a"
	I0819 12:22:41.813902  152005 cri.go:89] found id: "c60e87585c18b8c957d1589206c0b6314660d2d53a3e5f15d6d2ba55239ce612"
	I0819 12:22:41.813906  152005 cri.go:89] found id: "e5ae2af15481ac5157a34eeb2c75068066569366d3e36ae802b0422fcb487d5f"
	I0819 12:22:41.813912  152005 cri.go:89] found id: "71ea972f68efba523d2b778fd4605656e17625fd8ec7ce7a9612db8f732c3c20"
	I0819 12:22:41.813917  152005 cri.go:89] found id: "65c4c92bbc54f2f7bf61f448bcfdc2da3729dc17648d995c3ff9d60dfa47695e"
	I0819 12:22:41.813925  152005 cri.go:89] found id: "9962cad310005442266bfd1020886340d9bf21d8ef66a09a62236768f681bb7d"
	I0819 12:22:41.813930  152005 cri.go:89] found id: "6edfdd09f92c9948c43a5a94428c6cf6442587d0219b11057bbc6093ee3c00ff"
	I0819 12:22:41.813934  152005 cri.go:89] found id: "b48be31cf132a32ee25c0f7f53defa53fb04dd4b0288a1d3859f0d01a25b03ce"
	I0819 12:22:41.813939  152005 cri.go:89] found id: "2bb9efcb02cd6e18c239d0ebd154a76fef9d91391a48cc48f377111051f3f1e2"
	I0819 12:22:41.813946  152005 cri.go:89] found id: ""
	I0819 12:22:41.814000  152005 ssh_runner.go:195] Run: sudo runc list -f json
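	The "found id" lines above come from the crictl listing run just before them: each line of `crictl ps -a --quiet` output is one container ID. A minimal, hypothetical Python stand-in for that listing (assumes crictl and sudo are available on the node) could be:

    import subprocess

    def kube_system_container_ids():
        # Same command the log shows minikube running over SSH.
        out = subprocess.run(
            ["sudo", "crictl", "ps", "-a", "--quiet",
             "--label", "io.kubernetes.pod.namespace=kube-system"],
            check=True, capture_output=True, text=True).stdout
        return [line for line in out.splitlines() if line.strip()]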
	
	
	==> CRI-O <==
	Aug 19 12:23:27 pause-732494 crio[2294]: time="2024-08-19 12:23:27.947975880Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=18fb8538-bd6b-4177-897f-cf143f43b47c name=/runtime.v1.RuntimeService/Version
	Aug 19 12:23:27 pause-732494 crio[2294]: time="2024-08-19 12:23:27.949054346Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1341b033-c280-4456-9797-903cd9bc8abb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:23:27 pause-732494 crio[2294]: time="2024-08-19 12:23:27.949422596Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070207949401302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1341b033-c280-4456-9797-903cd9bc8abb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:23:27 pause-732494 crio[2294]: time="2024-08-19 12:23:27.950063008Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce5a84ee-bff3-4beb-9732-305699cb8632 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:23:27 pause-732494 crio[2294]: time="2024-08-19 12:23:27.950138944Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ce5a84ee-bff3-4beb-9732-305699cb8632 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:23:27 pause-732494 crio[2294]: time="2024-08-19 12:23:27.950434525Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5472142059dc1e0d942de7e16bb4a4cd81635be4286bffaf5eacfbcf274c7600,PodSandboxId:1ffd10874e037e4e4fcd6b216fa60209e1f6bee9750372a45c6ff802a4abfe89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724070188044038410,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wpw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 970f3a70-eaec-4fa5-805b-d73e1d0b5bd5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8877c6b07f8505ecb2154decf63f9b2c033210e5e5bd4969754f75fe36c88cf6,PodSandboxId:b5f58edd9ed7ddabf338cccb91f14ac18e3ec9f587f9dd82aedf20ac24fb0557,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724070184408627339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef9fe889f780f0982931809bbf7d2ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11fa5592e91c96d71da4281f3d06093b317902db21e9bbba4e780c30c423d362,PodSandboxId:0d620d042fc8d6afc748db24f1e3d0ca7886af8a0c173dd84d3952bdc47f4672,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724070184416361624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07c695b284933de9b58e4af8ab9fe584,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab01352ae9be4604feee7e0d92d6aff8d60f32cc2ba34d6d090518291eb5bce,PodSandboxId:237fab524d8540f19afdbb4499d4485dec65050ba9eaf58fd115886d43dcc723,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724070184421851334,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65855e373718b09045fe1e68c3ae64a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e94100bd01e2a12a024ad475ac4e28b03a98c63cdbf5a27557ca11b485a43f1,PodSandboxId:e4d65ecc1c7223855c73b559fa8f6f2ad7605e3dca3f53f1e00f5b74d7bbef5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724070184378365435,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4a1243f79a8e11c8ec728acd096082,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee34eb433386d39df2a34a246ae9285341bd41cdcc5cb9467f0a669c55745dd6,PodSandboxId:1521c494ddb4d051f82a73651c27e0791e4bd29737e72f6f1314e3af2319a027,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070181806017331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-njb6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6d1896-a35f-4c32-af3c-0cfcad6829a5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f709bf5da6fd4d408a7e54b1ee6a528e5f32e8b0ad2d49fec9ff5fc186598c5d,PodSandboxId:1521c494ddb4d051f82a73651c27e0791e4bd29737e72f6f1314e3af2319a027,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724070161013361508,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-njb6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6d1896-a35f-4c32-af3c-0cfcad6829a5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d046b6b14ce676d0afa11cb1cc91ccb4234a25da2d94e0fe3d2eec6f0af3b51d,PodSandboxId:0d620d042fc8d6afc748db24f1e3d0ca7886af8a0c173dd84d3952bdc47f4672,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724070160286354380,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07c695b284933de9b58e4af8ab9fe584,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d093640f912a4953d7d7640925c0c37b84a18e2d96e841b2fcb9417f3f1c60,PodSandboxId:b5f58edd9ed7ddabf338cccb91f14ac18e3ec9f587f9dd82aedf20ac24fb0557,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724070160221136026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-732494,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef9fe889f780f0982931809bbf7d2ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0125c48d70b7ffe290c4dd43414dc36ded6bf35818dc29ab7d8ac945fdec4a59,PodSandboxId:e4d65ecc1c7223855c73b559fa8f6f2ad7605e3dca3f53f1e00f5b74d7bbef5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724070160173816925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-73249
4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4a1243f79a8e11c8ec728acd096082,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c60e87585c18b8c957d1589206c0b6314660d2d53a3e5f15d6d2ba55239ce612,PodSandboxId:1ffd10874e037e4e4fcd6b216fa60209e1f6bee9750372a45c6ff802a4abfe89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724070160027600255,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wpw2,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 970f3a70-eaec-4fa5-805b-d73e1d0b5bd5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:641b6edcbd83a540e5e005320007d44ab96f19ba227e084c263b8094d648669a,PodSandboxId:237fab524d8540f19afdbb4499d4485dec65050ba9eaf58fd115886d43dcc723,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724070160090947392,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: a65855e373718b09045fe1e68c3ae64a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71ea972f68efba523d2b778fd4605656e17625fd8ec7ce7a9612db8f732c3c20,PodSandboxId:b8ece353147142272aa444a998cff4ce753f98232648af5c5c3bfa97c594fada,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724070129367789151,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ftmpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5264410a-eee3-42c3-9a5c-
6452ab172aae,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ce5a84ee-bff3-4beb-9732-305699cb8632 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:23:27 pause-732494 crio[2294]: time="2024-08-19 12:23:27.996924159Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b35fe65b-68d7-4321-a61b-86ded17b5e27 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:23:27 pause-732494 crio[2294]: time="2024-08-19 12:23:27.997012621Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b35fe65b-68d7-4321-a61b-86ded17b5e27 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:23:27 pause-732494 crio[2294]: time="2024-08-19 12:23:27.997881676Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4bddb129-4162-4455-9697-f9b1a75a141b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:23:27 pause-732494 crio[2294]: time="2024-08-19 12:23:27.998273447Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070207998251253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4bddb129-4162-4455-9697-f9b1a75a141b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:23:27 pause-732494 crio[2294]: time="2024-08-19 12:23:27.999002494Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26553551-e089-4ebd-bb91-de694445e2fe name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:23:27 pause-732494 crio[2294]: time="2024-08-19 12:23:27.999090818Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26553551-e089-4ebd-bb91-de694445e2fe name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:23:27 pause-732494 crio[2294]: time="2024-08-19 12:23:27.999622267Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5472142059dc1e0d942de7e16bb4a4cd81635be4286bffaf5eacfbcf274c7600,PodSandboxId:1ffd10874e037e4e4fcd6b216fa60209e1f6bee9750372a45c6ff802a4abfe89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724070188044038410,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wpw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 970f3a70-eaec-4fa5-805b-d73e1d0b5bd5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8877c6b07f8505ecb2154decf63f9b2c033210e5e5bd4969754f75fe36c88cf6,PodSandboxId:b5f58edd9ed7ddabf338cccb91f14ac18e3ec9f587f9dd82aedf20ac24fb0557,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724070184408627339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef9fe889f780f0982931809bbf7d2ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11fa5592e91c96d71da4281f3d06093b317902db21e9bbba4e780c30c423d362,PodSandboxId:0d620d042fc8d6afc748db24f1e3d0ca7886af8a0c173dd84d3952bdc47f4672,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724070184416361624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07c695b284933de9b58e4af8ab9fe584,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab01352ae9be4604feee7e0d92d6aff8d60f32cc2ba34d6d090518291eb5bce,PodSandboxId:237fab524d8540f19afdbb4499d4485dec65050ba9eaf58fd115886d43dcc723,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724070184421851334,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65855e373718b09045fe1e68c3ae64a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e94100bd01e2a12a024ad475ac4e28b03a98c63cdbf5a27557ca11b485a43f1,PodSandboxId:e4d65ecc1c7223855c73b559fa8f6f2ad7605e3dca3f53f1e00f5b74d7bbef5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724070184378365435,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4a1243f79a8e11c8ec728acd096082,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee34eb433386d39df2a34a246ae9285341bd41cdcc5cb9467f0a669c55745dd6,PodSandboxId:1521c494ddb4d051f82a73651c27e0791e4bd29737e72f6f1314e3af2319a027,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070181806017331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-njb6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6d1896-a35f-4c32-af3c-0cfcad6829a5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f709bf5da6fd4d408a7e54b1ee6a528e5f32e8b0ad2d49fec9ff5fc186598c5d,PodSandboxId:1521c494ddb4d051f82a73651c27e0791e4bd29737e72f6f1314e3af2319a027,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724070161013361508,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-njb6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6d1896-a35f-4c32-af3c-0cfcad6829a5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d046b6b14ce676d0afa11cb1cc91ccb4234a25da2d94e0fe3d2eec6f0af3b51d,PodSandboxId:0d620d042fc8d6afc748db24f1e3d0ca7886af8a0c173dd84d3952bdc47f4672,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724070160286354380,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07c695b284933de9b58e4af8ab9fe584,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d093640f912a4953d7d7640925c0c37b84a18e2d96e841b2fcb9417f3f1c60,PodSandboxId:b5f58edd9ed7ddabf338cccb91f14ac18e3ec9f587f9dd82aedf20ac24fb0557,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724070160221136026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-732494,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef9fe889f780f0982931809bbf7d2ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0125c48d70b7ffe290c4dd43414dc36ded6bf35818dc29ab7d8ac945fdec4a59,PodSandboxId:e4d65ecc1c7223855c73b559fa8f6f2ad7605e3dca3f53f1e00f5b74d7bbef5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724070160173816925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-73249
4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4a1243f79a8e11c8ec728acd096082,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c60e87585c18b8c957d1589206c0b6314660d2d53a3e5f15d6d2ba55239ce612,PodSandboxId:1ffd10874e037e4e4fcd6b216fa60209e1f6bee9750372a45c6ff802a4abfe89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724070160027600255,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wpw2,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 970f3a70-eaec-4fa5-805b-d73e1d0b5bd5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:641b6edcbd83a540e5e005320007d44ab96f19ba227e084c263b8094d648669a,PodSandboxId:237fab524d8540f19afdbb4499d4485dec65050ba9eaf58fd115886d43dcc723,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724070160090947392,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: a65855e373718b09045fe1e68c3ae64a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71ea972f68efba523d2b778fd4605656e17625fd8ec7ce7a9612db8f732c3c20,PodSandboxId:b8ece353147142272aa444a998cff4ce753f98232648af5c5c3bfa97c594fada,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724070129367789151,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ftmpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5264410a-eee3-42c3-9a5c-
6452ab172aae,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26553551-e089-4ebd-bb91-de694445e2fe name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:23:28 pause-732494 crio[2294]: time="2024-08-19 12:23:28.011386541Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=4777f50a-bcbe-4758-b779-ee03f4d65bd3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 19 12:23:28 pause-732494 crio[2294]: time="2024-08-19 12:23:28.011588635Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1521c494ddb4d051f82a73651c27e0791e4bd29737e72f6f1314e3af2319a027,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-njb6z,Uid:7e6d1896-a35f-4c32-af3c-0cfcad6829a5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724070159940611921,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-njb6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6d1896-a35f-4c32-af3c-0cfcad6829a5,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T12:22:08.694092545Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0d620d042fc8d6afc748db24f1e3d0ca7886af8a0c173dd84d3952bdc47f4672,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-732494,Uid:07c695b284933de9b58e4af8ab9fe584,Namespace:kube-system,
Attempt:1,},State:SANDBOX_READY,CreatedAt:1724070159914213970,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07c695b284933de9b58e4af8ab9fe584,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 07c695b284933de9b58e4af8ab9fe584,kubernetes.io/config.seen: 2024-08-19T12:22:03.365849570Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b5f58edd9ed7ddabf338cccb91f14ac18e3ec9f587f9dd82aedf20ac24fb0557,Metadata:&PodSandboxMetadata{Name:etcd-pause-732494,Uid:1ef9fe889f780f0982931809bbf7d2ec,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724070159912207967,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef9fe889f780f0982931809bbf7d2ec,tier: control-plane,},Annotations:map
[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.24:2379,kubernetes.io/config.hash: 1ef9fe889f780f0982931809bbf7d2ec,kubernetes.io/config.seen: 2024-08-19T12:22:03.365851372Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e4d65ecc1c7223855c73b559fa8f6f2ad7605e3dca3f53f1e00f5b74d7bbef5a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-732494,Uid:2e4a1243f79a8e11c8ec728acd096082,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724070159805197695,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4a1243f79a8e11c8ec728acd096082,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2e4a1243f79a8e11c8ec728acd096082,kubernetes.io/config.seen: 2024-08-19T12:22:03.365846319Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{
Id:237fab524d8540f19afdbb4499d4485dec65050ba9eaf58fd115886d43dcc723,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-732494,Uid:a65855e373718b09045fe1e68c3ae64a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724070159800253061,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65855e373718b09045fe1e68c3ae64a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.24:8443,kubernetes.io/config.hash: a65855e373718b09045fe1e68c3ae64a,kubernetes.io/config.seen: 2024-08-19T12:22:03.365852944Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1ffd10874e037e4e4fcd6b216fa60209e1f6bee9750372a45c6ff802a4abfe89,Metadata:&PodSandboxMetadata{Name:kube-proxy-4wpw2,Uid:970f3a70-eaec-4fa5-805b-d73e1d0b5bd5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,Creat
edAt:1724070159769944928,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4wpw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 970f3a70-eaec-4fa5-805b-d73e1d0b5bd5,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T12:22:08.599817364Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b8ece353147142272aa444a998cff4ce753f98232648af5c5c3bfa97c594fada,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-ftmpt,Uid:5264410a-eee3-42c3-9a5c-6452ab172aae,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724070128997408745,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-ftmpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5264410a-eee3-42c3-9a5c-6452ab172aae,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/c
onfig.seen: 2024-08-19T12:22:08.687502370Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=4777f50a-bcbe-4758-b779-ee03f4d65bd3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 19 12:23:28 pause-732494 crio[2294]: time="2024-08-19 12:23:28.012146462Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80baceb2-8085-491b-b10d-0f8a8310bb57 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:23:28 pause-732494 crio[2294]: time="2024-08-19 12:23:28.012204291Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80baceb2-8085-491b-b10d-0f8a8310bb57 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:23:28 pause-732494 crio[2294]: time="2024-08-19 12:23:28.012467779Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5472142059dc1e0d942de7e16bb4a4cd81635be4286bffaf5eacfbcf274c7600,PodSandboxId:1ffd10874e037e4e4fcd6b216fa60209e1f6bee9750372a45c6ff802a4abfe89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724070188044038410,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wpw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 970f3a70-eaec-4fa5-805b-d73e1d0b5bd5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8877c6b07f8505ecb2154decf63f9b2c033210e5e5bd4969754f75fe36c88cf6,PodSandboxId:b5f58edd9ed7ddabf338cccb91f14ac18e3ec9f587f9dd82aedf20ac24fb0557,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724070184408627339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef9fe889f780f0982931809bbf7d2ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11fa5592e91c96d71da4281f3d06093b317902db21e9bbba4e780c30c423d362,PodSandboxId:0d620d042fc8d6afc748db24f1e3d0ca7886af8a0c173dd84d3952bdc47f4672,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724070184416361624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07c695b284933de9b58e4af8ab9fe584,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab01352ae9be4604feee7e0d92d6aff8d60f32cc2ba34d6d090518291eb5bce,PodSandboxId:237fab524d8540f19afdbb4499d4485dec65050ba9eaf58fd115886d43dcc723,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724070184421851334,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65855e373718b09045fe1e68c3ae64a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e94100bd01e2a12a024ad475ac4e28b03a98c63cdbf5a27557ca11b485a43f1,PodSandboxId:e4d65ecc1c7223855c73b559fa8f6f2ad7605e3dca3f53f1e00f5b74d7bbef5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724070184378365435,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4a1243f79a8e11c8ec728acd096082,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee34eb433386d39df2a34a246ae9285341bd41cdcc5cb9467f0a669c55745dd6,PodSandboxId:1521c494ddb4d051f82a73651c27e0791e4bd29737e72f6f1314e3af2319a027,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070181806017331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-njb6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6d1896-a35f-4c32-af3c-0cfcad6829a5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f709bf5da6fd4d408a7e54b1ee6a528e5f32e8b0ad2d49fec9ff5fc186598c5d,PodSandboxId:1521c494ddb4d051f82a73651c27e0791e4bd29737e72f6f1314e3af2319a027,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724070161013361508,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-njb6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6d1896-a35f-4c32-af3c-0cfcad6829a5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d046b6b14ce676d0afa11cb1cc91ccb4234a25da2d94e0fe3d2eec6f0af3b51d,PodSandboxId:0d620d042fc8d6afc748db24f1e3d0ca7886af8a0c173dd84d3952bdc47f4672,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724070160286354380,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07c695b284933de9b58e4af8ab9fe584,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d093640f912a4953d7d7640925c0c37b84a18e2d96e841b2fcb9417f3f1c60,PodSandboxId:b5f58edd9ed7ddabf338cccb91f14ac18e3ec9f587f9dd82aedf20ac24fb0557,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724070160221136026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-732494,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef9fe889f780f0982931809bbf7d2ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0125c48d70b7ffe290c4dd43414dc36ded6bf35818dc29ab7d8ac945fdec4a59,PodSandboxId:e4d65ecc1c7223855c73b559fa8f6f2ad7605e3dca3f53f1e00f5b74d7bbef5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724070160173816925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-73249
4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4a1243f79a8e11c8ec728acd096082,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c60e87585c18b8c957d1589206c0b6314660d2d53a3e5f15d6d2ba55239ce612,PodSandboxId:1ffd10874e037e4e4fcd6b216fa60209e1f6bee9750372a45c6ff802a4abfe89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724070160027600255,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wpw2,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 970f3a70-eaec-4fa5-805b-d73e1d0b5bd5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:641b6edcbd83a540e5e005320007d44ab96f19ba227e084c263b8094d648669a,PodSandboxId:237fab524d8540f19afdbb4499d4485dec65050ba9eaf58fd115886d43dcc723,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724070160090947392,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: a65855e373718b09045fe1e68c3ae64a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71ea972f68efba523d2b778fd4605656e17625fd8ec7ce7a9612db8f732c3c20,PodSandboxId:b8ece353147142272aa444a998cff4ce753f98232648af5c5c3bfa97c594fada,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724070129367789151,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ftmpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5264410a-eee3-42c3-9a5c-
6452ab172aae,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80baceb2-8085-491b-b10d-0f8a8310bb57 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:23:28 pause-732494 crio[2294]: time="2024-08-19 12:23:28.046019082Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e9ad605c-f40a-4b80-b0cc-fdee249dc3b7 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:23:28 pause-732494 crio[2294]: time="2024-08-19 12:23:28.046092175Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e9ad605c-f40a-4b80-b0cc-fdee249dc3b7 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:23:28 pause-732494 crio[2294]: time="2024-08-19 12:23:28.047373090Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0fd2a283-4602-4c73-8c79-262cecb2ac73 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:23:28 pause-732494 crio[2294]: time="2024-08-19 12:23:28.048395048Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070208048365822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0fd2a283-4602-4c73-8c79-262cecb2ac73 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:23:28 pause-732494 crio[2294]: time="2024-08-19 12:23:28.049108460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=75b6afc3-81e3-488c-a7ac-490f2a777396 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:23:28 pause-732494 crio[2294]: time="2024-08-19 12:23:28.049285834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=75b6afc3-81e3-488c-a7ac-490f2a777396 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:23:28 pause-732494 crio[2294]: time="2024-08-19 12:23:28.050426015Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5472142059dc1e0d942de7e16bb4a4cd81635be4286bffaf5eacfbcf274c7600,PodSandboxId:1ffd10874e037e4e4fcd6b216fa60209e1f6bee9750372a45c6ff802a4abfe89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724070188044038410,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wpw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 970f3a70-eaec-4fa5-805b-d73e1d0b5bd5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8877c6b07f8505ecb2154decf63f9b2c033210e5e5bd4969754f75fe36c88cf6,PodSandboxId:b5f58edd9ed7ddabf338cccb91f14ac18e3ec9f587f9dd82aedf20ac24fb0557,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724070184408627339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef9fe889f780f0982931809bbf7d2ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11fa5592e91c96d71da4281f3d06093b317902db21e9bbba4e780c30c423d362,PodSandboxId:0d620d042fc8d6afc748db24f1e3d0ca7886af8a0c173dd84d3952bdc47f4672,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724070184416361624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07c695b284933de9b58e4af8ab9fe584,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab01352ae9be4604feee7e0d92d6aff8d60f32cc2ba34d6d090518291eb5bce,PodSandboxId:237fab524d8540f19afdbb4499d4485dec65050ba9eaf58fd115886d43dcc723,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724070184421851334,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65855e373718b09045fe1e68c3ae64a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e94100bd01e2a12a024ad475ac4e28b03a98c63cdbf5a27557ca11b485a43f1,PodSandboxId:e4d65ecc1c7223855c73b559fa8f6f2ad7605e3dca3f53f1e00f5b74d7bbef5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724070184378365435,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4a1243f79a8e11c8ec728acd096082,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee34eb433386d39df2a34a246ae9285341bd41cdcc5cb9467f0a669c55745dd6,PodSandboxId:1521c494ddb4d051f82a73651c27e0791e4bd29737e72f6f1314e3af2319a027,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070181806017331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-njb6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6d1896-a35f-4c32-af3c-0cfcad6829a5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f709bf5da6fd4d408a7e54b1ee6a528e5f32e8b0ad2d49fec9ff5fc186598c5d,PodSandboxId:1521c494ddb4d051f82a73651c27e0791e4bd29737e72f6f1314e3af2319a027,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724070161013361508,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-njb6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6d1896-a35f-4c32-af3c-0cfcad6829a5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d046b6b14ce676d0afa11cb1cc91ccb4234a25da2d94e0fe3d2eec6f0af3b51d,PodSandboxId:0d620d042fc8d6afc748db24f1e3d0ca7886af8a0c173dd84d3952bdc47f4672,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724070160286354380,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07c695b284933de9b58e4af8ab9fe584,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d093640f912a4953d7d7640925c0c37b84a18e2d96e841b2fcb9417f3f1c60,PodSandboxId:b5f58edd9ed7ddabf338cccb91f14ac18e3ec9f587f9dd82aedf20ac24fb0557,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724070160221136026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-732494,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef9fe889f780f0982931809bbf7d2ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0125c48d70b7ffe290c4dd43414dc36ded6bf35818dc29ab7d8ac945fdec4a59,PodSandboxId:e4d65ecc1c7223855c73b559fa8f6f2ad7605e3dca3f53f1e00f5b74d7bbef5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724070160173816925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-73249
4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4a1243f79a8e11c8ec728acd096082,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c60e87585c18b8c957d1589206c0b6314660d2d53a3e5f15d6d2ba55239ce612,PodSandboxId:1ffd10874e037e4e4fcd6b216fa60209e1f6bee9750372a45c6ff802a4abfe89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724070160027600255,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wpw2,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 970f3a70-eaec-4fa5-805b-d73e1d0b5bd5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:641b6edcbd83a540e5e005320007d44ab96f19ba227e084c263b8094d648669a,PodSandboxId:237fab524d8540f19afdbb4499d4485dec65050ba9eaf58fd115886d43dcc723,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724070160090947392,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: a65855e373718b09045fe1e68c3ae64a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71ea972f68efba523d2b778fd4605656e17625fd8ec7ce7a9612db8f732c3c20,PodSandboxId:b8ece353147142272aa444a998cff4ce753f98232648af5c5c3bfa97c594fada,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724070129367789151,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ftmpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5264410a-eee3-42c3-9a5c-
6452ab172aae,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=75b6afc3-81e3-488c-a7ac-490f2a777396 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5472142059dc1       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   20 seconds ago       Running             kube-proxy                2                   1ffd10874e037       kube-proxy-4wpw2
	2ab01352ae9be       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   23 seconds ago       Running             kube-apiserver            2                   237fab524d854       kube-apiserver-pause-732494
	11fa5592e91c9       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   23 seconds ago       Running             kube-scheduler            2                   0d620d042fc8d       kube-scheduler-pause-732494
	8877c6b07f850       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   23 seconds ago       Running             etcd                      2                   b5f58edd9ed7d       etcd-pause-732494
	5e94100bd01e2       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   23 seconds ago       Running             kube-controller-manager   2                   e4d65ecc1c722       kube-controller-manager-pause-732494
	ee34eb433386d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   26 seconds ago       Running             coredns                   2                   1521c494ddb4d       coredns-6f6b679f8f-njb6z
	f709bf5da6fd4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   47 seconds ago       Exited              coredns                   1                   1521c494ddb4d       coredns-6f6b679f8f-njb6z
	d046b6b14ce67       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   47 seconds ago       Exited              kube-scheduler            1                   0d620d042fc8d       kube-scheduler-pause-732494
	e3d093640f912       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   47 seconds ago       Exited              etcd                      1                   b5f58edd9ed7d       etcd-pause-732494
	0125c48d70b7f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   47 seconds ago       Exited              kube-controller-manager   1                   e4d65ecc1c722       kube-controller-manager-pause-732494
	641b6edcbd83a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   48 seconds ago       Exited              kube-apiserver            1                   237fab524d854       kube-apiserver-pause-732494
	c60e87585c18b       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   48 seconds ago       Exited              kube-proxy                1                   1ffd10874e037       kube-proxy-4wpw2
	71ea972f68efb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   b8ece35314714       coredns-6f6b679f8f-ftmpt
	
	
	==> coredns [71ea972f68efba523d2b778fd4605656e17625fd8ec7ce7a9612db8f732c3c20] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ee34eb433386d39df2a34a246ae9285341bd41cdcc5cb9467f0a669c55745dd6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51740 - 3507 "HINFO IN 5015129589594737323.4920509428964378509. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018596609s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=458": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=458": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=456": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=456": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=458": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=458": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [f709bf5da6fd4d408a7e54b1ee6a528e5f32e8b0ad2d49fec9ff5fc186598c5d] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:47571 - 13684 "HINFO IN 89910630764924702.7204132197362725527. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.074883099s
	
	
	==> describe nodes <==
	Name:               pause-732494
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-732494
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=pause-732494
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T12_22_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:22:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-732494
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:23:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:23:07 +0000   Mon, 19 Aug 2024 12:21:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:23:07 +0000   Mon, 19 Aug 2024 12:21:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:23:07 +0000   Mon, 19 Aug 2024 12:21:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:23:07 +0000   Mon, 19 Aug 2024 12:22:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.24
	  Hostname:    pause-732494
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 c0dad5176af54959be2b433942849eb5
	  System UUID:                c0dad517-6af5-4959-be2b-433942849eb5
	  Boot ID:                    235f441c-14fa-4c4e-836a-10850720ff7e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-njb6z                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     80s
	  kube-system                 etcd-pause-732494                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         85s
	  kube-system                 kube-apiserver-pause-732494             250m (12%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-controller-manager-pause-732494    200m (10%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-4wpw2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-pause-732494             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 78s                kube-proxy       
	  Normal  Starting                 20s                kube-proxy       
	  Normal  Starting                 43s                kube-proxy       
	  Normal  Starting                 85s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  85s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  85s                kubelet          Node pause-732494 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    85s                kubelet          Node pause-732494 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     85s                kubelet          Node pause-732494 status is now: NodeHasSufficientPID
	  Normal  NodeReady                84s                kubelet          Node pause-732494 status is now: NodeReady
	  Normal  RegisteredNode           81s                node-controller  Node pause-732494 event: Registered Node pause-732494 in Controller
	  Normal  RegisteredNode           41s                node-controller  Node pause-732494 event: Registered Node pause-732494 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  24s (x8 over 25s)  kubelet          Node pause-732494 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 25s)  kubelet          Node pause-732494 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 25s)  kubelet          Node pause-732494 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18s                node-controller  Node pause-732494 event: Registered Node pause-732494 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.789431] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.063638] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065002] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.189384] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.122451] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.295195] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.136912] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.759014] systemd-fstab-generator[903]: Ignoring "noauto" option for root device
	[  +0.065578] kauditd_printk_skb: 158 callbacks suppressed
	[Aug19 12:22] systemd-fstab-generator[1239]: Ignoring "noauto" option for root device
	[  +0.089547] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.286281] systemd-fstab-generator[1359]: Ignoring "noauto" option for root device
	[  +0.101667] kauditd_printk_skb: 18 callbacks suppressed
	[ +11.550639] kauditd_printk_skb: 98 callbacks suppressed
	[ +18.325564] systemd-fstab-generator[2213]: Ignoring "noauto" option for root device
	[  +0.131583] systemd-fstab-generator[2225]: Ignoring "noauto" option for root device
	[  +0.169896] systemd-fstab-generator[2239]: Ignoring "noauto" option for root device
	[  +0.139868] systemd-fstab-generator[2251]: Ignoring "noauto" option for root device
	[  +0.282909] systemd-fstab-generator[2279]: Ignoring "noauto" option for root device
	[  +1.745939] systemd-fstab-generator[2906]: Ignoring "noauto" option for root device
	[  +4.559311] kauditd_printk_skb: 203 callbacks suppressed
	[Aug19 12:23] systemd-fstab-generator[3400]: Ignoring "noauto" option for root device
	[  +4.645021] kauditd_printk_skb: 45 callbacks suppressed
	[ +16.199377] systemd-fstab-generator[3832]: Ignoring "noauto" option for root device
	
	
	==> etcd [8877c6b07f8505ecb2154decf63f9b2c033210e5e5bd4969754f75fe36c88cf6] <==
	{"level":"info","ts":"2024-08-19T12:23:12.780548Z","caller":"traceutil/trace.go:171","msg":"trace[476268379] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:521; }","duration":"419.015699ms","start":"2024-08-19T12:23:12.361498Z","end":"2024-08-19T12:23:12.780513Z","steps":["trace[476268379] 'agreement among raft nodes before linearized reading'  (duration: 415.767492ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T12:23:12.780637Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T12:23:12.361469Z","time spent":"419.153468ms","remote":"127.0.0.1:38840","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":4136,"request content":"key:\"/registry/deployments/kube-system/coredns\" "}
	{"level":"warn","ts":"2024-08-19T12:23:12.780827Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T12:23:12.358736Z","time spent":"422.082467ms","remote":"127.0.0.1:38562","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5862,"request content":"key:\"/registry/pods/kube-system/etcd-pause-732494\" "}
	{"level":"warn","ts":"2024-08-19T12:23:13.556300Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"460.19134ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1654388325119112165 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:519 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-19T12:23:13.556578Z","caller":"traceutil/trace.go:171","msg":"trace[963687503] transaction","detail":"{read_only:false; response_revision:523; number_of_response:1; }","duration":"758.327102ms","start":"2024-08-19T12:23:12.798216Z","end":"2024-08-19T12:23:13.556543Z","steps":["trace[963687503] 'process raft request'  (duration: 758.251797ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T12:23:13.556691Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T12:23:12.798203Z","time spent":"758.449846ms","remote":"127.0.0.1:38562","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5132,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-6f6b679f8f-njb6z\" mod_revision:487 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-6f6b679f8f-njb6z\" value_size:5073 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-6f6b679f8f-njb6z\" > >"}
	{"level":"info","ts":"2024-08-19T12:23:13.556922Z","caller":"traceutil/trace.go:171","msg":"trace[286289547] transaction","detail":"{read_only:false; response_revision:522; number_of_response:1; }","duration":"761.02589ms","start":"2024-08-19T12:23:12.795885Z","end":"2024-08-19T12:23:13.556911Z","steps":["trace[286289547] 'process raft request'  (duration: 300.165079ms)","trace[286289547] 'compare'  (duration: 459.873177ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T12:23:13.556998Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T12:23:12.795865Z","time spent":"761.101207ms","remote":"127.0.0.1:38840","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4118,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:519 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"info","ts":"2024-08-19T12:23:13.557182Z","caller":"traceutil/trace.go:171","msg":"trace[277296740] linearizableReadLoop","detail":"{readStateIndex:559; appliedIndex:558; }","duration":"759.777261ms","start":"2024-08-19T12:23:12.797397Z","end":"2024-08-19T12:23:13.557175Z","steps":["trace[277296740] 'read index received'  (duration: 298.66299ms)","trace[277296740] 'applied index is now lower than readState.Index'  (duration: 461.113221ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T12:23:13.557299Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"759.897068ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-732494\" ","response":"range_response_count:1 size:5840"}
	{"level":"info","ts":"2024-08-19T12:23:13.557338Z","caller":"traceutil/trace.go:171","msg":"trace[916470048] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-732494; range_end:; response_count:1; response_revision:523; }","duration":"759.938486ms","start":"2024-08-19T12:23:12.797394Z","end":"2024-08-19T12:23:13.557332Z","steps":["trace[916470048] 'agreement among raft nodes before linearized reading'  (duration: 759.848682ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T12:23:13.557373Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T12:23:12.797362Z","time spent":"760.006205ms","remote":"127.0.0.1:38562","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5862,"request content":"key:\"/registry/pods/kube-system/etcd-pause-732494\" "}
	{"level":"warn","ts":"2024-08-19T12:23:13.957968Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.016433ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1654388325119112170 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-6f6b679f8f-njb6z\" mod_revision:523 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-6f6b679f8f-njb6z\" value_size:4895 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-6f6b679f8f-njb6z\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-19T12:23:13.958041Z","caller":"traceutil/trace.go:171","msg":"trace[1619200175] linearizableReadLoop","detail":"{readStateIndex:561; appliedIndex:560; }","duration":"384.660959ms","start":"2024-08-19T12:23:13.573371Z","end":"2024-08-19T12:23:13.958031Z","steps":["trace[1619200175] 'read index received'  (duration: 201.538039ms)","trace[1619200175] 'applied index is now lower than readState.Index'  (duration: 183.122181ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T12:23:13.958119Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"384.745904ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-732494\" ","response":"range_response_count:1 size:5840"}
	{"level":"info","ts":"2024-08-19T12:23:13.958147Z","caller":"traceutil/trace.go:171","msg":"trace[979662940] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-732494; range_end:; response_count:1; response_revision:524; }","duration":"384.774534ms","start":"2024-08-19T12:23:13.573367Z","end":"2024-08-19T12:23:13.958142Z","steps":["trace[979662940] 'agreement among raft nodes before linearized reading'  (duration: 384.69204ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T12:23:13.958170Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T12:23:13.573342Z","time spent":"384.822353ms","remote":"127.0.0.1:38562","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5862,"request content":"key:\"/registry/pods/kube-system/etcd-pause-732494\" "}
	{"level":"info","ts":"2024-08-19T12:23:13.958279Z","caller":"traceutil/trace.go:171","msg":"trace[1866670096] transaction","detail":"{read_only:false; response_revision:524; number_of_response:1; }","duration":"385.297251ms","start":"2024-08-19T12:23:13.572969Z","end":"2024-08-19T12:23:13.958266Z","steps":["trace[1866670096] 'process raft request'  (duration: 201.930255ms)","trace[1866670096] 'compare'  (duration: 182.926104ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T12:23:13.958519Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T12:23:13.572953Z","time spent":"385.522609ms","remote":"127.0.0.1:38562","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4954,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-6f6b679f8f-njb6z\" mod_revision:523 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-6f6b679f8f-njb6z\" value_size:4895 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-6f6b679f8f-njb6z\" > >"}
	{"level":"info","ts":"2024-08-19T12:23:13.959151Z","caller":"traceutil/trace.go:171","msg":"trace[385061752] transaction","detail":"{read_only:false; response_revision:527; number_of_response:1; }","duration":"376.722751ms","start":"2024-08-19T12:23:13.582403Z","end":"2024-08-19T12:23:13.959126Z","steps":["trace[385061752] 'process raft request'  (duration: 376.188929ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T12:23:13.959241Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T12:23:13.582389Z","time spent":"376.808721ms","remote":"127.0.0.1:38852","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3782,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-6f6b679f8f\" mod_revision:521 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-6f6b679f8f\" value_size:3722 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-6f6b679f8f\" > >"}
	{"level":"info","ts":"2024-08-19T12:23:13.960922Z","caller":"traceutil/trace.go:171","msg":"trace[793178495] transaction","detail":"{read_only:false; response_revision:526; number_of_response:1; }","duration":"382.986765ms","start":"2024-08-19T12:23:13.577923Z","end":"2024-08-19T12:23:13.960910Z","steps":["trace[793178495] 'process raft request'  (duration: 380.641271ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T12:23:13.962056Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T12:23:13.577909Z","time spent":"384.057397ms","remote":"127.0.0.1:38646","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1252,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-lvd7l\" mod_revision:520 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-lvd7l\" value_size:1193 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-lvd7l\" > >"}
	{"level":"info","ts":"2024-08-19T12:23:13.964757Z","caller":"traceutil/trace.go:171","msg":"trace[2021394293] transaction","detail":"{read_only:false; response_revision:525; number_of_response:1; }","duration":"390.986637ms","start":"2024-08-19T12:23:13.573458Z","end":"2024-08-19T12:23:13.964445Z","steps":["trace[2021394293] 'process raft request'  (duration: 385.056734ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T12:23:13.965737Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T12:23:13.573450Z","time spent":"391.5717ms","remote":"127.0.0.1:38530","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":785,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:518 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:728 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	
	
	==> etcd [e3d093640f912a4953d7d7640925c0c37b84a18e2d96e841b2fcb9417f3f1c60] <==
	{"level":"info","ts":"2024-08-19T12:22:42.898419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T12:22:42.898444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 received MsgPreVoteResp from 602226ed500416f5 at term 2"}
	{"level":"info","ts":"2024-08-19T12:22:42.898458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T12:22:42.898464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 received MsgVoteResp from 602226ed500416f5 at term 3"}
	{"level":"info","ts":"2024-08-19T12:22:42.898472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 became leader at term 3"}
	{"level":"info","ts":"2024-08-19T12:22:42.898479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 602226ed500416f5 elected leader 602226ed500416f5 at term 3"}
	{"level":"info","ts":"2024-08-19T12:22:42.900891Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"602226ed500416f5","local-member-attributes":"{Name:pause-732494 ClientURLs:[https://192.168.39.24:2379]}","request-path":"/0/members/602226ed500416f5/attributes","cluster-id":"6c3e0d5efc74209","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T12:22:42.901113Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T12:22:42.901177Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T12:22:42.901495Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T12:22:42.901534Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T12:22:42.902210Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T12:22:42.902185Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T12:22:42.902941Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.24:2379"}
	{"level":"info","ts":"2024-08-19T12:22:42.903374Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T12:22:51.947368Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-19T12:22:51.947427Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-732494","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.24:2380"],"advertise-client-urls":["https://192.168.39.24:2379"]}
	{"level":"warn","ts":"2024-08-19T12:22:51.947543Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T12:22:51.947653Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T12:22:51.972633Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.24:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T12:22:51.972685Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.24:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-19T12:22:51.972777Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"602226ed500416f5","current-leader-member-id":"602226ed500416f5"}
	{"level":"info","ts":"2024-08-19T12:22:51.979837Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.24:2380"}
	{"level":"info","ts":"2024-08-19T12:22:51.980012Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.24:2380"}
	{"level":"info","ts":"2024-08-19T12:22:51.980036Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-732494","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.24:2380"],"advertise-client-urls":["https://192.168.39.24:2379"]}
	
	
	==> kernel <==
	 12:23:28 up 1 min,  0 users,  load average: 1.04, 0.49, 0.18
	Linux pause-732494 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2ab01352ae9be4604feee7e0d92d6aff8d60f32cc2ba34d6d090518291eb5bce] <==
	I0819 12:23:07.641066       1 policy_source.go:224] refreshing policies
	I0819 12:23:07.669426       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 12:23:07.673492       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 12:23:07.674211       1 aggregator.go:171] initial CRD sync complete...
	I0819 12:23:07.674252       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 12:23:07.674262       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 12:23:07.674270       1 cache.go:39] Caches are synced for autoregister controller
	I0819 12:23:07.727049       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 12:23:07.728324       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 12:23:07.728550       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 12:23:07.728614       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 12:23:07.730555       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 12:23:07.731486       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 12:23:07.731513       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 12:23:07.760343       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 12:23:07.761977       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0819 12:23:07.793007       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0819 12:23:08.533776       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 12:23:09.033041       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 12:23:09.047133       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 12:23:09.096983       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 12:23:09.141247       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 12:23:09.154265       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 12:23:10.959631       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 12:23:11.846935       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [641b6edcbd83a540e5e005320007d44ab96f19ba227e084c263b8094d648669a] <==
	W0819 12:23:01.439604       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.439608       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.446137       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.486779       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.488054       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.491523       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.492882       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.499274       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.592483       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.610084       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.658169       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.674776       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.686570       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.731004       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.760070       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.783001       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.799529       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.852563       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.871463       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.874317       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:02.024050       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:02.052479       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:02.185209       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:02.235126       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:02.240037       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [0125c48d70b7ffe290c4dd43414dc36ded6bf35818dc29ab7d8ac945fdec4a59] <==
	I0819 12:22:47.458769       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-732494"
	I0819 12:22:47.458824       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0819 12:22:47.461295       1 shared_informer.go:320] Caches are synced for persistent volume
	I0819 12:22:47.463812       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0819 12:22:47.466743       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0819 12:22:47.469030       1 shared_informer.go:320] Caches are synced for PVC protection
	I0819 12:22:47.470312       1 shared_informer.go:320] Caches are synced for GC
	I0819 12:22:47.470394       1 shared_informer.go:320] Caches are synced for ephemeral
	I0819 12:22:47.470775       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0819 12:22:47.472153       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0819 12:22:47.472240       1 shared_informer.go:320] Caches are synced for crt configmap
	I0819 12:22:47.474788       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0819 12:22:47.478193       1 shared_informer.go:320] Caches are synced for endpoint
	I0819 12:22:47.487796       1 shared_informer.go:320] Caches are synced for deployment
	I0819 12:22:47.494772       1 shared_informer.go:320] Caches are synced for cronjob
	I0819 12:22:47.497064       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="72.747322ms"
	I0819 12:22:47.497797       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="49.361µs"
	I0819 12:22:47.569979       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0819 12:22:47.673127       1 shared_informer.go:320] Caches are synced for daemon sets
	I0819 12:22:47.674784       1 shared_informer.go:320] Caches are synced for stateful set
	I0819 12:22:47.679944       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 12:22:47.685748       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 12:22:48.084425       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 12:22:48.084532       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0819 12:22:48.114955       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [5e94100bd01e2a12a024ad475ac4e28b03a98c63cdbf5a27557ca11b485a43f1] <==
	I0819 12:23:10.949501       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-732494"
	I0819 12:23:10.949555       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0819 12:23:10.949744       1 shared_informer.go:320] Caches are synced for daemon sets
	I0819 12:23:10.950081       1 shared_informer.go:320] Caches are synced for job
	I0819 12:23:10.951691       1 shared_informer.go:320] Caches are synced for service account
	I0819 12:23:10.951894       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0819 12:23:10.960947       1 shared_informer.go:320] Caches are synced for namespace
	I0819 12:23:10.964403       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0819 12:23:10.967217       1 shared_informer.go:320] Caches are synced for stateful set
	I0819 12:23:10.968603       1 shared_informer.go:320] Caches are synced for TTL
	I0819 12:23:11.006370       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0819 12:23:11.008047       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0819 12:23:11.065352       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 12:23:11.124017       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 12:23:11.148746       1 shared_informer.go:320] Caches are synced for attach detach
	I0819 12:23:11.200039       1 shared_informer.go:320] Caches are synced for PV protection
	I0819 12:23:11.200107       1 shared_informer.go:320] Caches are synced for persistent volume
	I0819 12:23:11.585875       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 12:23:11.648223       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 12:23:11.648266       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0819 12:23:12.783953       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="1.819467818s"
	I0819 12:23:12.784347       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="86.664µs"
	I0819 12:23:13.982249       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="416.745273ms"
	I0819 12:23:14.014575       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="32.192362ms"
	I0819 12:23:14.014847       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="79.695µs"
	
	
	==> kube-proxy [5472142059dc1e0d942de7e16bb4a4cd81635be4286bffaf5eacfbcf274c7600] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 12:23:08.197082       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 12:23:08.210454       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.24"]
	E0819 12:23:08.210610       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 12:23:08.240350       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 12:23:08.240400       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 12:23:08.240445       1 server_linux.go:169] "Using iptables Proxier"
	I0819 12:23:08.242808       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 12:23:08.243052       1 server.go:483] "Version info" version="v1.31.0"
	I0819 12:23:08.243080       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:23:08.244797       1 config.go:197] "Starting service config controller"
	I0819 12:23:08.244821       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 12:23:08.244846       1 config.go:104] "Starting endpoint slice config controller"
	I0819 12:23:08.244851       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 12:23:08.245138       1 config.go:326] "Starting node config controller"
	I0819 12:23:08.245163       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 12:23:08.345894       1 shared_informer.go:320] Caches are synced for node config
	I0819 12:23:08.346010       1 shared_informer.go:320] Caches are synced for service config
	I0819 12:23:08.346025       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [c60e87585c18b8c957d1589206c0b6314660d2d53a3e5f15d6d2ba55239ce612] <==
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 12:22:42.319913       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 12:22:44.170628       1 server.go:666] "Failed to retrieve node info" err="nodes \"pause-732494\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot get resource \"nodes\" in API group \"\" at the cluster scope"
	I0819 12:22:45.331834       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.24"]
	E0819 12:22:45.332031       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 12:22:45.364546       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 12:22:45.364654       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 12:22:45.364719       1 server_linux.go:169] "Using iptables Proxier"
	I0819 12:22:45.367337       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 12:22:45.367785       1 server.go:483] "Version info" version="v1.31.0"
	I0819 12:22:45.367893       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:22:45.369156       1 config.go:197] "Starting service config controller"
	I0819 12:22:45.369260       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 12:22:45.369333       1 config.go:104] "Starting endpoint slice config controller"
	I0819 12:22:45.369381       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 12:22:45.369402       1 config.go:326] "Starting node config controller"
	I0819 12:22:45.369426       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 12:22:45.470261       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 12:22:45.470320       1 shared_informer.go:320] Caches are synced for service config
	I0819 12:22:45.470422       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [11fa5592e91c96d71da4281f3d06093b317902db21e9bbba4e780c30c423d362] <==
	I0819 12:23:05.359265       1 serving.go:386] Generated self-signed cert in-memory
	W0819 12:23:07.592212       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 12:23:07.592353       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 12:23:07.592432       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 12:23:07.592473       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 12:23:07.657648       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 12:23:07.657780       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:23:07.668771       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 12:23:07.669018       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 12:23:07.669896       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 12:23:07.669936       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 12:23:07.770938       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d046b6b14ce676d0afa11cb1cc91ccb4234a25da2d94e0fe3d2eec6f0af3b51d] <==
	I0819 12:22:42.352367       1 serving.go:386] Generated self-signed cert in-memory
	W0819 12:22:44.108273       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 12:22:44.108319       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 12:22:44.108330       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 12:22:44.108336       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 12:22:44.164530       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 12:22:44.164598       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:22:44.172487       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 12:22:44.172539       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 12:22:44.176784       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 12:22:44.176855       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	W0819 12:22:44.193205       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 12:22:44.193296       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0819 12:22:45.773169       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 12:22:51.815941       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 19 12:23:04 pause-732494 kubelet[3407]: I0819 12:23:04.358974    3407 scope.go:117] "RemoveContainer" containerID="e3d093640f912a4953d7d7640925c0c37b84a18e2d96e841b2fcb9417f3f1c60"
	Aug 19 12:23:04 pause-732494 kubelet[3407]: I0819 12:23:04.360782    3407 scope.go:117] "RemoveContainer" containerID="641b6edcbd83a540e5e005320007d44ab96f19ba227e084c263b8094d648669a"
	Aug 19 12:23:04 pause-732494 kubelet[3407]: I0819 12:23:04.361106    3407 scope.go:117] "RemoveContainer" containerID="0125c48d70b7ffe290c4dd43414dc36ded6bf35818dc29ab7d8ac945fdec4a59"
	Aug 19 12:23:04 pause-732494 kubelet[3407]: I0819 12:23:04.364081    3407 scope.go:117] "RemoveContainer" containerID="d046b6b14ce676d0afa11cb1cc91ccb4234a25da2d94e0fe3d2eec6f0af3b51d"
	Aug 19 12:23:04 pause-732494 kubelet[3407]: E0819 12:23:04.365964    3407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-732494?timeout=10s\": dial tcp 192.168.39.24:8443: connect: connection refused" interval="800ms"
	Aug 19 12:23:04 pause-732494 kubelet[3407]: I0819 12:23:04.576783    3407 kubelet_node_status.go:72] "Attempting to register node" node="pause-732494"
	Aug 19 12:23:04 pause-732494 kubelet[3407]: E0819 12:23:04.578340    3407 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.24:8443: connect: connection refused" node="pause-732494"
	Aug 19 12:23:04 pause-732494 kubelet[3407]: W0819 12:23:04.647392    3407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.24:8443: connect: connection refused
	Aug 19 12:23:04 pause-732494 kubelet[3407]: E0819 12:23:04.647504    3407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused" logger="UnhandledError"
	Aug 19 12:23:04 pause-732494 kubelet[3407]: W0819 12:23:04.738139    3407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.39.24:8443: connect: connection refused
	Aug 19 12:23:04 pause-732494 kubelet[3407]: E0819 12:23:04.738223    3407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused" logger="UnhandledError"
	Aug 19 12:23:05 pause-732494 kubelet[3407]: I0819 12:23:05.379894    3407 kubelet_node_status.go:72] "Attempting to register node" node="pause-732494"
	Aug 19 12:23:07 pause-732494 kubelet[3407]: I0819 12:23:07.712508    3407 kubelet_node_status.go:111] "Node was previously registered" node="pause-732494"
	Aug 19 12:23:07 pause-732494 kubelet[3407]: I0819 12:23:07.712613    3407 kubelet_node_status.go:75] "Successfully registered node" node="pause-732494"
	Aug 19 12:23:07 pause-732494 kubelet[3407]: I0819 12:23:07.712656    3407 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 19 12:23:07 pause-732494 kubelet[3407]: I0819 12:23:07.713620    3407 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 19 12:23:07 pause-732494 kubelet[3407]: I0819 12:23:07.722733    3407 apiserver.go:52] "Watching apiserver"
	Aug 19 12:23:07 pause-732494 kubelet[3407]: I0819 12:23:07.747044    3407 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 19 12:23:07 pause-732494 kubelet[3407]: I0819 12:23:07.806584    3407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/970f3a70-eaec-4fa5-805b-d73e1d0b5bd5-lib-modules\") pod \"kube-proxy-4wpw2\" (UID: \"970f3a70-eaec-4fa5-805b-d73e1d0b5bd5\") " pod="kube-system/kube-proxy-4wpw2"
	Aug 19 12:23:07 pause-732494 kubelet[3407]: I0819 12:23:07.806688    3407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/970f3a70-eaec-4fa5-805b-d73e1d0b5bd5-xtables-lock\") pod \"kube-proxy-4wpw2\" (UID: \"970f3a70-eaec-4fa5-805b-d73e1d0b5bd5\") " pod="kube-system/kube-proxy-4wpw2"
	Aug 19 12:23:08 pause-732494 kubelet[3407]: I0819 12:23:08.029076    3407 scope.go:117] "RemoveContainer" containerID="c60e87585c18b8c957d1589206c0b6314660d2d53a3e5f15d6d2ba55239ce612"
	Aug 19 12:23:13 pause-732494 kubelet[3407]: E0819 12:23:13.878680    3407 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070193877396767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:23:13 pause-732494 kubelet[3407]: E0819 12:23:13.878757    3407 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070193877396767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:23:23 pause-732494 kubelet[3407]: E0819 12:23:23.882256    3407 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070203881635169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:23:23 pause-732494 kubelet[3407]: E0819 12:23:23.882967    3407 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070203881635169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 12:23:27.622503  152621 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19476-99410/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
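The stderr block above shows minikube's log collector failing with "bufio.Scanner: token too long" while reading lastStart.txt. That error comes from Go's bufio.Scanner, which by default refuses tokens larger than bufio.MaxScanTokenSize (64 KiB), so a single over-long log line aborts the read. Purely as an illustrative sketch (this is not minikube's actual code, and the file path is a placeholder), a line reader can raise that limit with Scanner.Buffer:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // placeholder path, stands in for the file that failed above
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Default token limit is bufio.MaxScanTokenSize (64 KiB); allow lines up to 1 MiB instead.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			// Without the Buffer call, a very long line would surface here as
			// "bufio.Scanner: token too long", matching the error in the stderr above.
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}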
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-732494 -n pause-732494
helpers_test.go:261: (dbg) Run:  kubectl --context pause-732494 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-732494 -n pause-732494
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-732494 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-732494 logs -n 25: (1.250998926s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-787042 sudo crio            | cilium-787042             | jenkins | v1.33.1 | 19 Aug 24 12:18 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-787042                      | cilium-787042             | jenkins | v1.33.1 | 19 Aug 24 12:18 UTC | 19 Aug 24 12:18 UTC |
	| start   | -p force-systemd-flag-557690          | force-systemd-flag-557690 | jenkins | v1.33.1 | 19 Aug 24 12:18 UTC | 19 Aug 24 12:20 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p offline-crio-320395                | offline-crio-320395       | jenkins | v1.33.1 | 19 Aug 24 12:19 UTC | 19 Aug 24 12:19 UTC |
	| start   | -p cert-expiration-497658             | cert-expiration-497658    | jenkins | v1.33.1 | 19 Aug 24 12:19 UTC | 19 Aug 24 12:20 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-340370                | NoKubernetes-340370       | jenkins | v1.33.1 | 19 Aug 24 12:19 UTC | 19 Aug 24 12:20 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-357956             | running-upgrade-357956    | jenkins | v1.33.1 | 19 Aug 24 12:20 UTC | 19 Aug 24 12:21 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-557690 ssh cat     | force-systemd-flag-557690 | jenkins | v1.33.1 | 19 Aug 24 12:20 UTC | 19 Aug 24 12:20 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-557690          | force-systemd-flag-557690 | jenkins | v1.33.1 | 19 Aug 24 12:20 UTC | 19 Aug 24 12:20 UTC |
	| start   | -p cert-options-294561                | cert-options-294561       | jenkins | v1.33.1 | 19 Aug 24 12:20 UTC | 19 Aug 24 12:21 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-340370                | NoKubernetes-340370       | jenkins | v1.33.1 | 19 Aug 24 12:20 UTC | 19 Aug 24 12:20 UTC |
	| start   | -p NoKubernetes-340370                | NoKubernetes-340370       | jenkins | v1.33.1 | 19 Aug 24 12:20 UTC | 19 Aug 24 12:21 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-357956             | running-upgrade-357956    | jenkins | v1.33.1 | 19 Aug 24 12:21 UTC | 19 Aug 24 12:21 UTC |
	| start   | -p pause-732494 --memory=2048         | pause-732494              | jenkins | v1.33.1 | 19 Aug 24 12:21 UTC | 19 Aug 24 12:22 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-294561 ssh               | cert-options-294561       | jenkins | v1.33.1 | 19 Aug 24 12:21 UTC | 19 Aug 24 12:21 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-294561 -- sudo        | cert-options-294561       | jenkins | v1.33.1 | 19 Aug 24 12:21 UTC | 19 Aug 24 12:21 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-294561                | cert-options-294561       | jenkins | v1.33.1 | 19 Aug 24 12:21 UTC | 19 Aug 24 12:21 UTC |
	| start   | -p kubernetes-upgrade-814177          | kubernetes-upgrade-814177 | jenkins | v1.33.1 | 19 Aug 24 12:21 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-340370 sudo           | NoKubernetes-340370       | jenkins | v1.33.1 | 19 Aug 24 12:21 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-340370                | NoKubernetes-340370       | jenkins | v1.33.1 | 19 Aug 24 12:21 UTC | 19 Aug 24 12:21 UTC |
	| start   | -p NoKubernetes-340370                | NoKubernetes-340370       | jenkins | v1.33.1 | 19 Aug 24 12:21 UTC | 19 Aug 24 12:22 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-732494                       | pause-732494              | jenkins | v1.33.1 | 19 Aug 24 12:22 UTC | 19 Aug 24 12:23 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-340370 sudo           | NoKubernetes-340370       | jenkins | v1.33.1 | 19 Aug 24 12:22 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-340370                | NoKubernetes-340370       | jenkins | v1.33.1 | 19 Aug 24 12:22 UTC | 19 Aug 24 12:22 UTC |
	| start   | -p stopped-upgrade-111717             | minikube                  | jenkins | v1.26.0 | 19 Aug 24 12:22 UTC |                     |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 12:22:34
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 12:22:34.612118  152185 out.go:296] Setting OutFile to fd 1 ...
	I0819 12:22:34.612233  152185 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0819 12:22:34.612236  152185 out.go:309] Setting ErrFile to fd 2...
	I0819 12:22:34.612240  152185 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0819 12:22:34.612694  152185 root.go:329] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 12:22:34.612978  152185 out.go:303] Setting JSON to false
	I0819 12:22:34.613938  152185 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7501,"bootTime":1724062654,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 12:22:34.613999  152185 start.go:125] virtualization: kvm guest
	I0819 12:22:34.616364  152185 out.go:177] * [stopped-upgrade-111717] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0819 12:22:34.617694  152185 notify.go:193] Checking for updates...
	I0819 12:22:34.618864  152185 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 12:22:34.620034  152185 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:22:34.621571  152185 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 12:22:34.622904  152185 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 12:22:34.624372  152185 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 12:22:34.625821  152185 out.go:177]   - KUBECONFIG=/tmp/legacy_kubeconfig1993458851
	I0819 12:22:34.627525  152185 config.go:178] Loaded profile config "cert-expiration-497658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:22:34.627616  152185 config.go:178] Loaded profile config "kubernetes-upgrade-814177": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 12:22:34.627749  152185 config.go:178] Loaded profile config "pause-732494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:22:34.627805  152185 driver.go:360] Setting default libvirt URI to qemu:///system
	I0819 12:22:34.665283  152185 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 12:22:34.666327  152185 start.go:284] selected driver: kvm2
	I0819 12:22:34.666335  152185 start.go:805] validating driver "kvm2" against <nil>
	I0819 12:22:34.666352  152185 start.go:816] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 12:22:34.667093  152185 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:22:34.667285  152185 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19476-99410/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 12:22:34.683163  152185 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 12:22:34.683236  152185 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0819 12:22:34.683453  152185 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 12:22:34.683499  152185 cni.go:95] Creating CNI manager for ""
	I0819 12:22:34.683510  152185 cni.go:165] "kvm2" driver + crio runtime found, recommending bridge
	I0819 12:22:34.683516  152185 start_flags.go:305] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 12:22:34.683525  152185 start_flags.go:310] config:
	{Name:stopped-upgrade-111717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-111717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0819 12:22:34.683639  152185 iso.go:128] acquiring lock: {Name:mk0a8ef9bbe457d4d7a65de8e0862a7215eaca7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:22:34.685765  152185 out.go:177] * Starting control plane node stopped-upgrade-111717 in cluster stopped-upgrade-111717
	I0819 12:22:31.546005  152005 machine.go:93] provisionDockerMachine start ...
	I0819 12:22:31.546036  152005 main.go:141] libmachine: (pause-732494) Calling .DriverName
	I0819 12:22:31.546450  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:31.549220  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.549640  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:31.549670  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.549836  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:31.550023  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:31.550181  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:31.550335  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:31.550486  152005 main.go:141] libmachine: Using SSH client type: native
	I0819 12:22:31.550718  152005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0819 12:22:31.550730  152005 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 12:22:31.651775  152005 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-732494
	
	I0819 12:22:31.651812  152005 main.go:141] libmachine: (pause-732494) Calling .GetMachineName
	I0819 12:22:31.652054  152005 buildroot.go:166] provisioning hostname "pause-732494"
	I0819 12:22:31.652087  152005 main.go:141] libmachine: (pause-732494) Calling .GetMachineName
	I0819 12:22:31.652258  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:31.655130  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.655458  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:31.655495  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.655664  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:31.655870  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:31.656026  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:31.656159  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:31.656335  152005 main.go:141] libmachine: Using SSH client type: native
	I0819 12:22:31.656578  152005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0819 12:22:31.656597  152005 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-732494 && echo "pause-732494" | sudo tee /etc/hostname
	I0819 12:22:31.772723  152005 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-732494
	
	I0819 12:22:31.772758  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:31.775428  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.775831  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:31.775874  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.776095  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:31.776303  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:31.776471  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:31.776596  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:31.776797  152005 main.go:141] libmachine: Using SSH client type: native
	I0819 12:22:31.777031  152005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0819 12:22:31.777058  152005 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-732494' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-732494/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-732494' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 12:22:31.885199  152005 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:22:31.885244  152005 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19476-99410/.minikube CaCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19476-99410/.minikube}
	I0819 12:22:31.885294  152005 buildroot.go:174] setting up certificates
	I0819 12:22:31.885311  152005 provision.go:84] configureAuth start
	I0819 12:22:31.885327  152005 main.go:141] libmachine: (pause-732494) Calling .GetMachineName
	I0819 12:22:31.885632  152005 main.go:141] libmachine: (pause-732494) Calling .GetIP
	I0819 12:22:31.888635  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.889065  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:31.889095  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.889187  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:31.892060  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.892525  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:31.892555  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:31.892706  152005 provision.go:143] copyHostCerts
	I0819 12:22:31.892778  152005 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem, removing ...
	I0819 12:22:31.892796  152005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem
	I0819 12:22:31.892870  152005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/ca.pem (1082 bytes)
	I0819 12:22:31.893002  152005 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem, removing ...
	I0819 12:22:31.893013  152005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem
	I0819 12:22:31.893042  152005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/cert.pem (1123 bytes)
	I0819 12:22:31.893147  152005 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem, removing ...
	I0819 12:22:31.893163  152005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem
	I0819 12:22:31.893191  152005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19476-99410/.minikube/key.pem (1679 bytes)
	I0819 12:22:31.893277  152005 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem org=jenkins.pause-732494 san=[127.0.0.1 192.168.39.24 localhost minikube pause-732494]
	I0819 12:22:32.197776  152005 provision.go:177] copyRemoteCerts
	I0819 12:22:32.197836  152005 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 12:22:32.197862  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:32.200913  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:32.201260  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:32.201305  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:32.201443  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:32.201726  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:32.202016  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:32.202206  152005 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/pause-732494/id_rsa Username:docker}
	I0819 12:22:32.281680  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 12:22:32.308734  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 12:22:32.337096  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 12:22:32.369669  152005 provision.go:87] duration metric: took 484.343284ms to configureAuth
	I0819 12:22:32.369704  152005 buildroot.go:189] setting minikube options for container-runtime
	I0819 12:22:32.369983  152005 config.go:182] Loaded profile config "pause-732494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:22:32.370079  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:32.372952  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:32.373295  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:32.373323  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:32.373530  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:32.373766  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:32.373971  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:32.374187  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:32.374386  152005 main.go:141] libmachine: Using SSH client type: native
	I0819 12:22:32.374641  152005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0819 12:22:32.374667  152005 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 12:22:34.687233  152185 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0819 12:22:34.687269  152185 preload.go:148] Found local preload: /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0819 12:22:34.687276  152185 cache.go:57] Caching tarball of preloaded images
	I0819 12:22:34.687994  152185 preload.go:174] Found /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 12:22:34.688017  152185 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.1 on crio
	I0819 12:22:34.688845  152185 profile.go:148] Saving config to /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/stopped-upgrade-111717/config.json ...
	I0819 12:22:34.688871  152185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/stopped-upgrade-111717/config.json: {Name:mka2933c60a70e49712a675dc62663660862c4bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:22:34.689041  152185 cache.go:208] Successfully downloaded all kic artifacts
	I0819 12:22:34.689076  152185 start.go:352] acquiring machines lock for stopped-upgrade-111717: {Name:mk79269fbbd6912302c8df8ddb039d4e2f1a0790 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 12:22:38.100908  152185 start.go:356] acquired machines lock for "stopped-upgrade-111717" in 3.411810582s
	I0819 12:22:38.100971  152185 start.go:91] Provisioning new machine with config: &{Name:stopped-upgrade-111717 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-111717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 12:22:38.101104  152185 start.go:131] createHost starting for "" (driver="kvm2")
	I0819 12:22:38.103838  152185 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 12:22:38.104026  152185 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:22:38.104077  152185 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0819 12:22:38.120996  152185 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:33791
	I0819 12:22:38.121466  152185 main.go:134] libmachine: () Calling .GetVersion
	I0819 12:22:38.122212  152185 main.go:134] libmachine: Using API Version  1
	I0819 12:22:38.122232  152185 main.go:134] libmachine: () Calling .SetConfigRaw
	I0819 12:22:38.122620  152185 main.go:134] libmachine: () Calling .GetMachineName
	I0819 12:22:38.122855  152185 main.go:134] libmachine: (stopped-upgrade-111717) Calling .GetMachineName
	I0819 12:22:38.122998  152185 main.go:134] libmachine: (stopped-upgrade-111717) Calling .DriverName
	I0819 12:22:38.123145  152185 start.go:165] libmachine.API.Create for "stopped-upgrade-111717" (driver="kvm2")
	I0819 12:22:38.123172  152185 client.go:168] LocalClient.Create starting
	I0819 12:22:38.123208  152185 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem
	I0819 12:22:38.123243  152185 main.go:134] libmachine: Decoding PEM data...
	I0819 12:22:38.123260  152185 main.go:134] libmachine: Parsing certificate...
	I0819 12:22:38.123324  152185 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem
	I0819 12:22:38.123348  152185 main.go:134] libmachine: Decoding PEM data...
	I0819 12:22:38.123361  152185 main.go:134] libmachine: Parsing certificate...
	I0819 12:22:38.123403  152185 main.go:134] libmachine: Running pre-create checks...
	I0819 12:22:38.123414  152185 main.go:134] libmachine: (stopped-upgrade-111717) Calling .PreCreateCheck
	I0819 12:22:38.123796  152185 main.go:134] libmachine: (stopped-upgrade-111717) Calling .GetConfigRaw
	I0819 12:22:38.124229  152185 main.go:134] libmachine: Creating machine...
	I0819 12:22:38.124236  152185 main.go:134] libmachine: (stopped-upgrade-111717) Calling .Create
	I0819 12:22:38.124439  152185 main.go:134] libmachine: (stopped-upgrade-111717) Creating KVM machine...
	I0819 12:22:38.125744  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | found existing default KVM network
	I0819 12:22:38.127139  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:38.126960  152225 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:6c:b4:c0} reservation:<nil>}
	I0819 12:22:38.128230  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:38.128115  152225 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f0:ea:96} reservation:<nil>}
	I0819 12:22:38.129547  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:38.129462  152225 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00028b6a0}
	I0819 12:22:38.129568  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | created network xml: 
	I0819 12:22:38.129578  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | <network>
	I0819 12:22:38.129585  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG |   <name>mk-stopped-upgrade-111717</name>
	I0819 12:22:38.129602  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG |   <dns enable='no'/>
	I0819 12:22:38.129612  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG |   
	I0819 12:22:38.129622  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0819 12:22:38.129631  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG |     <dhcp>
	I0819 12:22:38.129641  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0819 12:22:38.129649  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG |     </dhcp>
	I0819 12:22:38.129656  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG |   </ip>
	I0819 12:22:38.129663  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG |   
	I0819 12:22:38.129668  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | </network>
	I0819 12:22:38.129677  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | 
	I0819 12:22:38.135408  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | trying to create private KVM network mk-stopped-upgrade-111717 192.168.61.0/24...
	I0819 12:22:38.212364  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | private KVM network mk-stopped-upgrade-111717 192.168.61.0/24 created
	I0819 12:22:38.212421  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:38.212323  152225 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 12:22:38.212444  152185 main.go:134] libmachine: (stopped-upgrade-111717) Setting up store path in /home/jenkins/minikube-integration/19476-99410/.minikube/machines/stopped-upgrade-111717 ...
	I0819 12:22:38.212468  152185 main.go:134] libmachine: (stopped-upgrade-111717) Building disk image from file:///home/jenkins/minikube-integration/19476-99410/.minikube/cache/iso/amd64/minikube-v1.26.0-amd64.iso
	I0819 12:22:38.212487  152185 main.go:134] libmachine: (stopped-upgrade-111717) Downloading /home/jenkins/minikube-integration/19476-99410/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19476-99410/.minikube/cache/iso/amd64/minikube-v1.26.0-amd64.iso...
	I0819 12:22:38.420755  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:38.420591  152225 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/stopped-upgrade-111717/id_rsa...
	I0819 12:22:38.514079  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:38.513961  152225 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/stopped-upgrade-111717/stopped-upgrade-111717.rawdisk...
	I0819 12:22:38.514098  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | Writing magic tar header
	I0819 12:22:38.514120  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | Writing SSH key tar header
	I0819 12:22:38.514128  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:38.514077  152225 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19476-99410/.minikube/machines/stopped-upgrade-111717 ...
	I0819 12:22:38.514222  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube/machines/stopped-upgrade-111717
	I0819 12:22:38.514228  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube/machines
	I0819 12:22:38.514237  152185 main.go:134] libmachine: (stopped-upgrade-111717) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube/machines/stopped-upgrade-111717 (perms=drwx------)
	I0819 12:22:38.514249  152185 main.go:134] libmachine: (stopped-upgrade-111717) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube/machines (perms=drwxr-xr-x)
	I0819 12:22:38.514258  152185 main.go:134] libmachine: (stopped-upgrade-111717) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410/.minikube (perms=drwxr-xr-x)
	I0819 12:22:38.514264  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 12:22:38.514274  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19476-99410
	I0819 12:22:38.514280  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 12:22:38.514288  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | Checking permissions on dir: /home/jenkins
	I0819 12:22:38.514293  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | Checking permissions on dir: /home
	I0819 12:22:38.514335  152185 main.go:134] libmachine: (stopped-upgrade-111717) Setting executable bit set on /home/jenkins/minikube-integration/19476-99410 (perms=drwxrwxr-x)
	I0819 12:22:38.514359  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | Skipping /home - not owner
	I0819 12:22:38.514372  152185 main.go:134] libmachine: (stopped-upgrade-111717) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 12:22:38.514389  152185 main.go:134] libmachine: (stopped-upgrade-111717) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 12:22:38.514398  152185 main.go:134] libmachine: (stopped-upgrade-111717) Creating domain...
	I0819 12:22:38.515495  152185 main.go:134] libmachine: (stopped-upgrade-111717) define libvirt domain using xml: 
	I0819 12:22:38.515518  152185 main.go:134] libmachine: (stopped-upgrade-111717) <domain type='kvm'>
	I0819 12:22:38.515531  152185 main.go:134] libmachine: (stopped-upgrade-111717)   <name>stopped-upgrade-111717</name>
	I0819 12:22:38.515540  152185 main.go:134] libmachine: (stopped-upgrade-111717)   <memory unit='MiB'>2200</memory>
	I0819 12:22:38.515548  152185 main.go:134] libmachine: (stopped-upgrade-111717)   <vcpu>2</vcpu>
	I0819 12:22:38.515555  152185 main.go:134] libmachine: (stopped-upgrade-111717)   <features>
	I0819 12:22:38.515564  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <acpi/>
	I0819 12:22:38.515571  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <apic/>
	I0819 12:22:38.515579  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <pae/>
	I0819 12:22:38.515585  152185 main.go:134] libmachine: (stopped-upgrade-111717)     
	I0819 12:22:38.515593  152185 main.go:134] libmachine: (stopped-upgrade-111717)   </features>
	I0819 12:22:38.515601  152185 main.go:134] libmachine: (stopped-upgrade-111717)   <cpu mode='host-passthrough'>
	I0819 12:22:38.515610  152185 main.go:134] libmachine: (stopped-upgrade-111717)   
	I0819 12:22:38.515618  152185 main.go:134] libmachine: (stopped-upgrade-111717)   </cpu>
	I0819 12:22:38.515626  152185 main.go:134] libmachine: (stopped-upgrade-111717)   <os>
	I0819 12:22:38.515640  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <type>hvm</type>
	I0819 12:22:38.515649  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <boot dev='cdrom'/>
	I0819 12:22:38.515657  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <boot dev='hd'/>
	I0819 12:22:38.515667  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <bootmenu enable='no'/>
	I0819 12:22:38.515674  152185 main.go:134] libmachine: (stopped-upgrade-111717)   </os>
	I0819 12:22:38.515681  152185 main.go:134] libmachine: (stopped-upgrade-111717)   <devices>
	I0819 12:22:38.515690  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <disk type='file' device='cdrom'>
	I0819 12:22:38.515702  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <source file='/home/jenkins/minikube-integration/19476-99410/.minikube/machines/stopped-upgrade-111717/boot2docker.iso'/>
	I0819 12:22:38.515710  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <target dev='hdc' bus='scsi'/>
	I0819 12:22:38.515760  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <readonly/>
	I0819 12:22:38.515776  152185 main.go:134] libmachine: (stopped-upgrade-111717)     </disk>
	I0819 12:22:38.515789  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <disk type='file' device='disk'>
	I0819 12:22:38.515795  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 12:22:38.515804  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <source file='/home/jenkins/minikube-integration/19476-99410/.minikube/machines/stopped-upgrade-111717/stopped-upgrade-111717.rawdisk'/>
	I0819 12:22:38.515810  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <target dev='hda' bus='virtio'/>
	I0819 12:22:38.515815  152185 main.go:134] libmachine: (stopped-upgrade-111717)     </disk>
	I0819 12:22:38.515820  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <interface type='network'>
	I0819 12:22:38.515826  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <source network='mk-stopped-upgrade-111717'/>
	I0819 12:22:38.515831  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <model type='virtio'/>
	I0819 12:22:38.515836  152185 main.go:134] libmachine: (stopped-upgrade-111717)     </interface>
	I0819 12:22:38.515841  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <interface type='network'>
	I0819 12:22:38.515847  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <source network='default'/>
	I0819 12:22:38.515855  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <model type='virtio'/>
	I0819 12:22:38.515865  152185 main.go:134] libmachine: (stopped-upgrade-111717)     </interface>
	I0819 12:22:38.515873  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <serial type='pty'>
	I0819 12:22:38.515880  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <target port='0'/>
	I0819 12:22:38.515890  152185 main.go:134] libmachine: (stopped-upgrade-111717)     </serial>
	I0819 12:22:38.515895  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <console type='pty'>
	I0819 12:22:38.515900  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <target type='serial' port='0'/>
	I0819 12:22:38.515905  152185 main.go:134] libmachine: (stopped-upgrade-111717)     </console>
	I0819 12:22:38.515910  152185 main.go:134] libmachine: (stopped-upgrade-111717)     <rng model='virtio'>
	I0819 12:22:38.515916  152185 main.go:134] libmachine: (stopped-upgrade-111717)       <backend model='random'>/dev/random</backend>
	I0819 12:22:38.515920  152185 main.go:134] libmachine: (stopped-upgrade-111717)     </rng>
	I0819 12:22:38.515925  152185 main.go:134] libmachine: (stopped-upgrade-111717)     
	I0819 12:22:38.515929  152185 main.go:134] libmachine: (stopped-upgrade-111717)     
	I0819 12:22:38.515934  152185 main.go:134] libmachine: (stopped-upgrade-111717)   </devices>
	I0819 12:22:38.515938  152185 main.go:134] libmachine: (stopped-upgrade-111717) </domain>
	I0819 12:22:38.515946  152185 main.go:134] libmachine: (stopped-upgrade-111717) 
	I0819 12:22:38.520589  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | domain stopped-upgrade-111717 has defined MAC address 52:54:00:2e:3e:7d in network default
	I0819 12:22:38.521194  152185 main.go:134] libmachine: (stopped-upgrade-111717) Ensuring networks are active...
	I0819 12:22:38.521256  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | domain stopped-upgrade-111717 has defined MAC address 52:54:00:d6:a1:f3 in network mk-stopped-upgrade-111717
	I0819 12:22:38.521965  152185 main.go:134] libmachine: (stopped-upgrade-111717) Ensuring network default is active
	I0819 12:22:38.522273  152185 main.go:134] libmachine: (stopped-upgrade-111717) Ensuring network mk-stopped-upgrade-111717 is active
	I0819 12:22:38.522726  152185 main.go:134] libmachine: (stopped-upgrade-111717) Getting domain xml...
	I0819 12:22:38.523400  152185 main.go:134] libmachine: (stopped-upgrade-111717) Creating domain...
	I0819 12:22:37.874249  152005 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 12:22:37.874286  152005 machine.go:96] duration metric: took 6.328260286s to provisionDockerMachine
	I0819 12:22:37.874305  152005 start.go:293] postStartSetup for "pause-732494" (driver="kvm2")
	I0819 12:22:37.874327  152005 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 12:22:37.874357  152005 main.go:141] libmachine: (pause-732494) Calling .DriverName
	I0819 12:22:37.874822  152005 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 12:22:37.874853  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:37.878095  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:37.878564  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:37.878590  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:37.878780  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:37.878991  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:37.879159  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:37.879310  152005 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/pause-732494/id_rsa Username:docker}
	I0819 12:22:37.959139  152005 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 12:22:37.963530  152005 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 12:22:37.963575  152005 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/addons for local assets ...
	I0819 12:22:37.963662  152005 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-99410/.minikube/files for local assets ...
	I0819 12:22:37.963784  152005 filesync.go:149] local asset: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem -> 1066322.pem in /etc/ssl/certs
	I0819 12:22:37.963908  152005 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 12:22:37.973387  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 12:22:37.997255  152005 start.go:296] duration metric: took 122.932444ms for postStartSetup
	I0819 12:22:37.997302  152005 fix.go:56] duration metric: took 6.47677026s for fixHost
	I0819 12:22:37.997324  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:38.000043  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:38.000434  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:38.000464  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:38.000645  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:38.000845  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:38.001023  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:38.001221  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:38.001377  152005 main.go:141] libmachine: Using SSH client type: native
	I0819 12:22:38.001610  152005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0819 12:22:38.001627  152005 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 12:22:38.100660  152005 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724070158.089732951
	
	I0819 12:22:38.100685  152005 fix.go:216] guest clock: 1724070158.089732951
	I0819 12:22:38.100694  152005 fix.go:229] Guest: 2024-08-19 12:22:38.089732951 +0000 UTC Remote: 2024-08-19 12:22:37.997306217 +0000 UTC m=+16.826858779 (delta=92.426734ms)
	I0819 12:22:38.100740  152005 fix.go:200] guest clock delta is within tolerance: 92.426734ms
	I0819 12:22:38.100747  152005 start.go:83] releasing machines lock for "pause-732494", held for 6.580248803s
	I0819 12:22:38.100776  152005 main.go:141] libmachine: (pause-732494) Calling .DriverName
	I0819 12:22:38.101106  152005 main.go:141] libmachine: (pause-732494) Calling .GetIP
	I0819 12:22:38.103703  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:38.104187  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:38.104221  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:38.104404  152005 main.go:141] libmachine: (pause-732494) Calling .DriverName
	I0819 12:22:38.105033  152005 main.go:141] libmachine: (pause-732494) Calling .DriverName
	I0819 12:22:38.105240  152005 main.go:141] libmachine: (pause-732494) Calling .DriverName
	I0819 12:22:38.105344  152005 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 12:22:38.105402  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:38.105438  152005 ssh_runner.go:195] Run: cat /version.json
	I0819 12:22:38.105461  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHHostname
	I0819 12:22:38.108213  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:38.108474  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:38.108533  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:38.108562  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:38.108718  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:38.108855  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:38.108877  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:38.108922  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:38.109024  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHPort
	I0819 12:22:38.109093  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:38.109194  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHKeyPath
	I0819 12:22:38.109283  152005 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/pause-732494/id_rsa Username:docker}
	I0819 12:22:38.109321  152005 main.go:141] libmachine: (pause-732494) Calling .GetSSHUsername
	I0819 12:22:38.109440  152005 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/pause-732494/id_rsa Username:docker}
	I0819 12:22:38.185742  152005 ssh_runner.go:195] Run: systemctl --version
	I0819 12:22:38.206055  152005 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 12:22:38.361478  152005 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 12:22:38.367061  152005 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 12:22:38.367127  152005 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 12:22:38.377122  152005 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 12:22:38.377164  152005 start.go:495] detecting cgroup driver to use...
	I0819 12:22:38.377244  152005 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 12:22:38.393342  152005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 12:22:38.413251  152005 docker.go:217] disabling cri-docker service (if available) ...
	I0819 12:22:38.413325  152005 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 12:22:38.429264  152005 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 12:22:38.447906  152005 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 12:22:38.584508  152005 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 12:22:38.718505  152005 docker.go:233] disabling docker service ...
	I0819 12:22:38.718594  152005 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 12:22:38.735006  152005 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 12:22:38.748877  152005 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 12:22:38.889235  152005 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 12:22:39.021023  152005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 12:22:39.036456  152005 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 12:22:39.056923  152005 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 12:22:39.057001  152005 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:22:39.068804  152005 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 12:22:39.068892  152005 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:22:39.079491  152005 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:22:39.091040  152005 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:22:39.101810  152005 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 12:22:39.112971  152005 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:22:39.123661  152005 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:22:39.136119  152005 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 12:22:39.146744  152005 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 12:22:39.156992  152005 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 12:22:39.167623  152005 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:22:39.307935  152005 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 12:22:39.697080  152005 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 12:22:39.697161  152005 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 12:22:39.708704  152005 start.go:563] Will wait 60s for crictl version
	I0819 12:22:39.708783  152005 ssh_runner.go:195] Run: which crictl
	I0819 12:22:39.724808  152005 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 12:22:39.912391  152005 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 12:22:39.912538  152005 ssh_runner.go:195] Run: crio --version
	I0819 12:22:40.192674  152005 ssh_runner.go:195] Run: crio --version
	I0819 12:22:40.415606  152005 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 12:22:40.416872  152005 main.go:141] libmachine: (pause-732494) Calling .GetIP
	I0819 12:22:40.420351  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:40.420702  152005 main.go:141] libmachine: (pause-732494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ab:ef", ip: ""} in network mk-pause-732494: {Iface:virbr2 ExpiryTime:2024-08-19 13:21:40 +0000 UTC Type:0 Mac:52:54:00:78:ab:ef Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-732494 Clientid:01:52:54:00:78:ab:ef}
	I0819 12:22:40.420732  152005 main.go:141] libmachine: (pause-732494) DBG | domain pause-732494 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:ab:ef in network mk-pause-732494
	I0819 12:22:40.420980  152005 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 12:22:40.440755  152005 kubeadm.go:883] updating cluster {Name:pause-732494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0
ClusterName:pause-732494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 12:22:40.440940  152005 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 12:22:40.441009  152005 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:22:40.564470  152005 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 12:22:40.564505  152005 crio.go:433] Images already preloaded, skipping extraction
	I0819 12:22:40.564571  152005 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:22:40.670891  152005 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 12:22:40.670915  152005 cache_images.go:84] Images are preloaded, skipping loading
	I0819 12:22:40.670923  152005 kubeadm.go:934] updating node { 192.168.39.24 8443 v1.31.0 crio true true} ...
	I0819 12:22:40.671031  152005 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-732494 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:pause-732494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 12:22:40.671111  152005 ssh_runner.go:195] Run: crio config
	I0819 12:22:40.751848  152005 cni.go:84] Creating CNI manager for ""
	I0819 12:22:40.751875  152005 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 12:22:40.751889  152005 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 12:22:40.751921  152005 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.24 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-732494 NodeName:pause-732494 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 12:22:40.752093  152005 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-732494"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.24
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.24"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 12:22:40.752168  152005 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 12:22:40.764821  152005 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 12:22:40.764907  152005 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 12:22:40.778039  152005 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0819 12:22:40.801052  152005 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 12:22:40.826516  152005 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0819 12:22:40.860431  152005 ssh_runner.go:195] Run: grep 192.168.39.24	control-plane.minikube.internal$ /etc/hosts
	I0819 12:22:40.867074  152005 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:22:41.153303  152005 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:22:41.178057  152005 certs.go:68] Setting up /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/pause-732494 for IP: 192.168.39.24
	I0819 12:22:41.178082  152005 certs.go:194] generating shared ca certs ...
	I0819 12:22:41.178103  152005 certs.go:226] acquiring lock for ca certs: {Name:mkea0a5571fe2f86ea35780a8a0585cdcb5f186c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:22:41.178290  152005 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key
	I0819 12:22:41.178351  152005 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key
	I0819 12:22:41.178368  152005 certs.go:256] generating profile certs ...
	I0819 12:22:41.178484  152005 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/pause-732494/client.key
	I0819 12:22:41.178565  152005 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/pause-732494/apiserver.key.96bc570c
	I0819 12:22:41.178616  152005 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/pause-732494/proxy-client.key
	I0819 12:22:41.178769  152005 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem (1338 bytes)
	W0819 12:22:41.178814  152005 certs.go:480] ignoring /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632_empty.pem, impossibly tiny 0 bytes
	I0819 12:22:41.178828  152005 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 12:22:41.178862  152005 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/ca.pem (1082 bytes)
	I0819 12:22:41.178898  152005 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/cert.pem (1123 bytes)
	I0819 12:22:41.178931  152005 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/certs/key.pem (1679 bytes)
	I0819 12:22:41.178987  152005 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem (1708 bytes)
	I0819 12:22:41.179867  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 12:22:41.215204  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 12:22:39.833804  152185 main.go:134] libmachine: (stopped-upgrade-111717) Waiting to get IP...
	I0819 12:22:39.834886  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | domain stopped-upgrade-111717 has defined MAC address 52:54:00:d6:a1:f3 in network mk-stopped-upgrade-111717
	I0819 12:22:39.835485  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | unable to find current IP address of domain stopped-upgrade-111717 in network mk-stopped-upgrade-111717
	I0819 12:22:39.835542  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:39.835460  152225 retry.go:31] will retry after 237.082125ms: waiting for machine to come up
	I0819 12:22:40.074371  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | domain stopped-upgrade-111717 has defined MAC address 52:54:00:d6:a1:f3 in network mk-stopped-upgrade-111717
	I0819 12:22:40.075108  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | unable to find current IP address of domain stopped-upgrade-111717 in network mk-stopped-upgrade-111717
	I0819 12:22:40.075134  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:40.075064  152225 retry.go:31] will retry after 295.061023ms: waiting for machine to come up
	I0819 12:22:40.371783  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | domain stopped-upgrade-111717 has defined MAC address 52:54:00:d6:a1:f3 in network mk-stopped-upgrade-111717
	I0819 12:22:40.372319  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | unable to find current IP address of domain stopped-upgrade-111717 in network mk-stopped-upgrade-111717
	I0819 12:22:40.372341  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:40.372278  152225 retry.go:31] will retry after 361.181319ms: waiting for machine to come up
	I0819 12:22:40.734926  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | domain stopped-upgrade-111717 has defined MAC address 52:54:00:d6:a1:f3 in network mk-stopped-upgrade-111717
	I0819 12:22:40.735439  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | unable to find current IP address of domain stopped-upgrade-111717 in network mk-stopped-upgrade-111717
	I0819 12:22:40.735462  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:40.735392  152225 retry.go:31] will retry after 377.649372ms: waiting for machine to come up
	I0819 12:22:41.115222  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | domain stopped-upgrade-111717 has defined MAC address 52:54:00:d6:a1:f3 in network mk-stopped-upgrade-111717
	I0819 12:22:41.115830  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | unable to find current IP address of domain stopped-upgrade-111717 in network mk-stopped-upgrade-111717
	I0819 12:22:41.115851  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:41.115773  152225 retry.go:31] will retry after 695.776357ms: waiting for machine to come up
	I0819 12:22:41.812870  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | domain stopped-upgrade-111717 has defined MAC address 52:54:00:d6:a1:f3 in network mk-stopped-upgrade-111717
	I0819 12:22:41.813366  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | unable to find current IP address of domain stopped-upgrade-111717 in network mk-stopped-upgrade-111717
	I0819 12:22:41.813386  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:41.813315  152225 retry.go:31] will retry after 598.994886ms: waiting for machine to come up
	I0819 12:22:42.414129  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | domain stopped-upgrade-111717 has defined MAC address 52:54:00:d6:a1:f3 in network mk-stopped-upgrade-111717
	I0819 12:22:42.414752  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | unable to find current IP address of domain stopped-upgrade-111717 in network mk-stopped-upgrade-111717
	I0819 12:22:42.414777  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:42.414692  152225 retry.go:31] will retry after 988.941212ms: waiting for machine to come up
	I0819 12:22:43.405260  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | domain stopped-upgrade-111717 has defined MAC address 52:54:00:d6:a1:f3 in network mk-stopped-upgrade-111717
	I0819 12:22:43.405805  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | unable to find current IP address of domain stopped-upgrade-111717 in network mk-stopped-upgrade-111717
	I0819 12:22:43.405828  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:43.405741  152225 retry.go:31] will retry after 1.097996222s: waiting for machine to come up
	I0819 12:22:44.505029  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | domain stopped-upgrade-111717 has defined MAC address 52:54:00:d6:a1:f3 in network mk-stopped-upgrade-111717
	I0819 12:22:44.505654  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | unable to find current IP address of domain stopped-upgrade-111717 in network mk-stopped-upgrade-111717
	I0819 12:22:44.505672  152185 main.go:134] libmachine: (stopped-upgrade-111717) DBG | I0819 12:22:44.505602  152225 retry.go:31] will retry after 1.420200785s: waiting for machine to come up
	I0819 12:22:41.258684  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 12:22:41.305321  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 12:22:41.332940  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/pause-732494/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 12:22:41.365978  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/pause-732494/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 12:22:41.393037  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/pause-732494/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 12:22:41.419348  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/pause-732494/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 12:22:41.450462  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 12:22:41.479646  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/certs/106632.pem --> /usr/share/ca-certificates/106632.pem (1338 bytes)
	I0819 12:22:41.514370  152005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/ssl/certs/1066322.pem --> /usr/share/ca-certificates/1066322.pem (1708 bytes)
	I0819 12:22:41.546927  152005 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 12:22:41.572743  152005 ssh_runner.go:195] Run: openssl version
	I0819 12:22:41.578664  152005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 12:22:41.590058  152005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:22:41.596800  152005 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 10:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:22:41.596868  152005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:22:41.602769  152005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 12:22:41.618424  152005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106632.pem && ln -fs /usr/share/ca-certificates/106632.pem /etc/ssl/certs/106632.pem"
	I0819 12:22:41.631176  152005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106632.pem
	I0819 12:22:41.636071  152005 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 11:26 /usr/share/ca-certificates/106632.pem
	I0819 12:22:41.636154  152005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106632.pem
	I0819 12:22:41.641886  152005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/106632.pem /etc/ssl/certs/51391683.0"
	I0819 12:22:41.652357  152005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1066322.pem && ln -fs /usr/share/ca-certificates/1066322.pem /etc/ssl/certs/1066322.pem"
	I0819 12:22:41.664754  152005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1066322.pem
	I0819 12:22:41.669272  152005 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 11:26 /usr/share/ca-certificates/1066322.pem
	I0819 12:22:41.669356  152005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1066322.pem
	I0819 12:22:41.675323  152005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1066322.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 12:22:41.685164  152005 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:22:41.689760  152005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 12:22:41.695350  152005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 12:22:41.701889  152005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 12:22:41.707860  152005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 12:22:41.713486  152005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 12:22:41.719094  152005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 12:22:41.726907  152005 kubeadm.go:392] StartCluster: {Name:pause-732494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:pause-732494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:22:41.727083  152005 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 12:22:41.727145  152005 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 12:22:41.813855  152005 cri.go:89] found id: "f709bf5da6fd4d408a7e54b1ee6a528e5f32e8b0ad2d49fec9ff5fc186598c5d"
	I0819 12:22:41.813881  152005 cri.go:89] found id: "d046b6b14ce676d0afa11cb1cc91ccb4234a25da2d94e0fe3d2eec6f0af3b51d"
	I0819 12:22:41.813888  152005 cri.go:89] found id: "e3d093640f912a4953d7d7640925c0c37b84a18e2d96e841b2fcb9417f3f1c60"
	I0819 12:22:41.813892  152005 cri.go:89] found id: "0125c48d70b7ffe290c4dd43414dc36ded6bf35818dc29ab7d8ac945fdec4a59"
	I0819 12:22:41.813897  152005 cri.go:89] found id: "641b6edcbd83a540e5e005320007d44ab96f19ba227e084c263b8094d648669a"
	I0819 12:22:41.813902  152005 cri.go:89] found id: "c60e87585c18b8c957d1589206c0b6314660d2d53a3e5f15d6d2ba55239ce612"
	I0819 12:22:41.813906  152005 cri.go:89] found id: "e5ae2af15481ac5157a34eeb2c75068066569366d3e36ae802b0422fcb487d5f"
	I0819 12:22:41.813912  152005 cri.go:89] found id: "71ea972f68efba523d2b778fd4605656e17625fd8ec7ce7a9612db8f732c3c20"
	I0819 12:22:41.813917  152005 cri.go:89] found id: "65c4c92bbc54f2f7bf61f448bcfdc2da3729dc17648d995c3ff9d60dfa47695e"
	I0819 12:22:41.813925  152005 cri.go:89] found id: "9962cad310005442266bfd1020886340d9bf21d8ef66a09a62236768f681bb7d"
	I0819 12:22:41.813930  152005 cri.go:89] found id: "6edfdd09f92c9948c43a5a94428c6cf6442587d0219b11057bbc6093ee3c00ff"
	I0819 12:22:41.813934  152005 cri.go:89] found id: "b48be31cf132a32ee25c0f7f53defa53fb04dd4b0288a1d3859f0d01a25b03ce"
	I0819 12:22:41.813939  152005 cri.go:89] found id: "2bb9efcb02cd6e18c239d0ebd154a76fef9d91391a48cc48f377111051f3f1e2"
	I0819 12:22:41.813946  152005 cri.go:89] found id: ""
	I0819 12:22:41.814000  152005 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 19 12:23:29 pause-732494 crio[2294]: time="2024-08-19 12:23:29.860129241Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070209860107555,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b3a3641-afaf-4738-a08b-90fde497f855 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:23:29 pause-732494 crio[2294]: time="2024-08-19 12:23:29.860684819Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b09b263-9229-4fda-a2f7-03fd61e147f6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:23:29 pause-732494 crio[2294]: time="2024-08-19 12:23:29.860780284Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b09b263-9229-4fda-a2f7-03fd61e147f6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:23:29 pause-732494 crio[2294]: time="2024-08-19 12:23:29.861070278Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5472142059dc1e0d942de7e16bb4a4cd81635be4286bffaf5eacfbcf274c7600,PodSandboxId:1ffd10874e037e4e4fcd6b216fa60209e1f6bee9750372a45c6ff802a4abfe89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724070188044038410,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wpw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 970f3a70-eaec-4fa5-805b-d73e1d0b5bd5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8877c6b07f8505ecb2154decf63f9b2c033210e5e5bd4969754f75fe36c88cf6,PodSandboxId:b5f58edd9ed7ddabf338cccb91f14ac18e3ec9f587f9dd82aedf20ac24fb0557,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724070184408627339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef9fe889f780f0982931809bbf7d2ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11fa5592e91c96d71da4281f3d06093b317902db21e9bbba4e780c30c423d362,PodSandboxId:0d620d042fc8d6afc748db24f1e3d0ca7886af8a0c173dd84d3952bdc47f4672,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724070184416361624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07c695b284933de9b58e4af8ab9fe584,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab01352ae9be4604feee7e0d92d6aff8d60f32cc2ba34d6d090518291eb5bce,PodSandboxId:237fab524d8540f19afdbb4499d4485dec65050ba9eaf58fd115886d43dcc723,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724070184421851334,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65855e373718b09045fe1e68c3ae64a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e94100bd01e2a12a024ad475ac4e28b03a98c63cdbf5a27557ca11b485a43f1,PodSandboxId:e4d65ecc1c7223855c73b559fa8f6f2ad7605e3dca3f53f1e00f5b74d7bbef5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724070184378365435,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4a1243f79a8e11c8ec728acd096082,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee34eb433386d39df2a34a246ae9285341bd41cdcc5cb9467f0a669c55745dd6,PodSandboxId:1521c494ddb4d051f82a73651c27e0791e4bd29737e72f6f1314e3af2319a027,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070181806017331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-njb6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6d1896-a35f-4c32-af3c-0cfcad6829a5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f709bf5da6fd4d408a7e54b1ee6a528e5f32e8b0ad2d49fec9ff5fc186598c5d,PodSandboxId:1521c494ddb4d051f82a73651c27e0791e4bd29737e72f6f1314e3af2319a027,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724070161013361508,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-njb6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6d1896-a35f-4c32-af3c-0cfcad6829a5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d046b6b14ce676d0afa11cb1cc91ccb4234a25da2d94e0fe3d2eec6f0af3b51d,PodSandboxId:0d620d042fc8d6afc748db24f1e3d0ca7886af8a0c173dd84d3952bdc47f4672,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724070160286354380,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07c695b284933de9b58e4af8ab9fe584,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d093640f912a4953d7d7640925c0c37b84a18e2d96e841b2fcb9417f3f1c60,PodSandboxId:b5f58edd9ed7ddabf338cccb91f14ac18e3ec9f587f9dd82aedf20ac24fb0557,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724070160221136026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-732494,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef9fe889f780f0982931809bbf7d2ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0125c48d70b7ffe290c4dd43414dc36ded6bf35818dc29ab7d8ac945fdec4a59,PodSandboxId:e4d65ecc1c7223855c73b559fa8f6f2ad7605e3dca3f53f1e00f5b74d7bbef5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724070160173816925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-73249
4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4a1243f79a8e11c8ec728acd096082,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c60e87585c18b8c957d1589206c0b6314660d2d53a3e5f15d6d2ba55239ce612,PodSandboxId:1ffd10874e037e4e4fcd6b216fa60209e1f6bee9750372a45c6ff802a4abfe89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724070160027600255,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wpw2,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 970f3a70-eaec-4fa5-805b-d73e1d0b5bd5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:641b6edcbd83a540e5e005320007d44ab96f19ba227e084c263b8094d648669a,PodSandboxId:237fab524d8540f19afdbb4499d4485dec65050ba9eaf58fd115886d43dcc723,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724070160090947392,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: a65855e373718b09045fe1e68c3ae64a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71ea972f68efba523d2b778fd4605656e17625fd8ec7ce7a9612db8f732c3c20,PodSandboxId:b8ece353147142272aa444a998cff4ce753f98232648af5c5c3bfa97c594fada,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724070129367789151,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ftmpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5264410a-eee3-42c3-9a5c-
6452ab172aae,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b09b263-9229-4fda-a2f7-03fd61e147f6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:23:29 pause-732494 crio[2294]: time="2024-08-19 12:23:29.899014513Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d16bc7e3-ae7d-4be8-8106-5c892aba12f3 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:23:29 pause-732494 crio[2294]: time="2024-08-19 12:23:29.899111747Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d16bc7e3-ae7d-4be8-8106-5c892aba12f3 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:23:29 pause-732494 crio[2294]: time="2024-08-19 12:23:29.900494155Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce0898bc-051e-489c-bfae-0c25acfa7575 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:23:29 pause-732494 crio[2294]: time="2024-08-19 12:23:29.901083702Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070209901058123,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce0898bc-051e-489c-bfae-0c25acfa7575 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:23:29 pause-732494 crio[2294]: time="2024-08-19 12:23:29.901567648Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f1adc6f-e6d4-499b-9ecf-d9b5995ef989 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:23:29 pause-732494 crio[2294]: time="2024-08-19 12:23:29.901629119Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f1adc6f-e6d4-499b-9ecf-d9b5995ef989 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:23:29 pause-732494 crio[2294]: time="2024-08-19 12:23:29.905962532Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5472142059dc1e0d942de7e16bb4a4cd81635be4286bffaf5eacfbcf274c7600,PodSandboxId:1ffd10874e037e4e4fcd6b216fa60209e1f6bee9750372a45c6ff802a4abfe89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724070188044038410,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wpw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 970f3a70-eaec-4fa5-805b-d73e1d0b5bd5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8877c6b07f8505ecb2154decf63f9b2c033210e5e5bd4969754f75fe36c88cf6,PodSandboxId:b5f58edd9ed7ddabf338cccb91f14ac18e3ec9f587f9dd82aedf20ac24fb0557,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724070184408627339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef9fe889f780f0982931809bbf7d2ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11fa5592e91c96d71da4281f3d06093b317902db21e9bbba4e780c30c423d362,PodSandboxId:0d620d042fc8d6afc748db24f1e3d0ca7886af8a0c173dd84d3952bdc47f4672,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724070184416361624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07c695b284933de9b58e4af8ab9fe584,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab01352ae9be4604feee7e0d92d6aff8d60f32cc2ba34d6d090518291eb5bce,PodSandboxId:237fab524d8540f19afdbb4499d4485dec65050ba9eaf58fd115886d43dcc723,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724070184421851334,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65855e373718b09045fe1e68c3ae64a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e94100bd01e2a12a024ad475ac4e28b03a98c63cdbf5a27557ca11b485a43f1,PodSandboxId:e4d65ecc1c7223855c73b559fa8f6f2ad7605e3dca3f53f1e00f5b74d7bbef5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724070184378365435,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4a1243f79a8e11c8ec728acd096082,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee34eb433386d39df2a34a246ae9285341bd41cdcc5cb9467f0a669c55745dd6,PodSandboxId:1521c494ddb4d051f82a73651c27e0791e4bd29737e72f6f1314e3af2319a027,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070181806017331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-njb6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6d1896-a35f-4c32-af3c-0cfcad6829a5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f709bf5da6fd4d408a7e54b1ee6a528e5f32e8b0ad2d49fec9ff5fc186598c5d,PodSandboxId:1521c494ddb4d051f82a73651c27e0791e4bd29737e72f6f1314e3af2319a027,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724070161013361508,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-njb6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6d1896-a35f-4c32-af3c-0cfcad6829a5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d046b6b14ce676d0afa11cb1cc91ccb4234a25da2d94e0fe3d2eec6f0af3b51d,PodSandboxId:0d620d042fc8d6afc748db24f1e3d0ca7886af8a0c173dd84d3952bdc47f4672,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724070160286354380,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07c695b284933de9b58e4af8ab9fe584,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d093640f912a4953d7d7640925c0c37b84a18e2d96e841b2fcb9417f3f1c60,PodSandboxId:b5f58edd9ed7ddabf338cccb91f14ac18e3ec9f587f9dd82aedf20ac24fb0557,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724070160221136026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-732494,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef9fe889f780f0982931809bbf7d2ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0125c48d70b7ffe290c4dd43414dc36ded6bf35818dc29ab7d8ac945fdec4a59,PodSandboxId:e4d65ecc1c7223855c73b559fa8f6f2ad7605e3dca3f53f1e00f5b74d7bbef5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724070160173816925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-73249
4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4a1243f79a8e11c8ec728acd096082,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c60e87585c18b8c957d1589206c0b6314660d2d53a3e5f15d6d2ba55239ce612,PodSandboxId:1ffd10874e037e4e4fcd6b216fa60209e1f6bee9750372a45c6ff802a4abfe89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724070160027600255,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wpw2,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 970f3a70-eaec-4fa5-805b-d73e1d0b5bd5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:641b6edcbd83a540e5e005320007d44ab96f19ba227e084c263b8094d648669a,PodSandboxId:237fab524d8540f19afdbb4499d4485dec65050ba9eaf58fd115886d43dcc723,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724070160090947392,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: a65855e373718b09045fe1e68c3ae64a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71ea972f68efba523d2b778fd4605656e17625fd8ec7ce7a9612db8f732c3c20,PodSandboxId:b8ece353147142272aa444a998cff4ce753f98232648af5c5c3bfa97c594fada,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724070129367789151,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ftmpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5264410a-eee3-42c3-9a5c-
6452ab172aae,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f1adc6f-e6d4-499b-9ecf-d9b5995ef989 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:23:29 pause-732494 crio[2294]: time="2024-08-19 12:23:29.947499230Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f54356cb-c29e-4056-a588-c31216f13d7e name=/runtime.v1.RuntimeService/Version
	Aug 19 12:23:29 pause-732494 crio[2294]: time="2024-08-19 12:23:29.947587810Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f54356cb-c29e-4056-a588-c31216f13d7e name=/runtime.v1.RuntimeService/Version
	Aug 19 12:23:29 pause-732494 crio[2294]: time="2024-08-19 12:23:29.948489483Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d1f72cca-be5b-4f3d-9714-6b30c5610e9d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:23:29 pause-732494 crio[2294]: time="2024-08-19 12:23:29.949076278Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070209949053209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d1f72cca-be5b-4f3d-9714-6b30c5610e9d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:23:29 pause-732494 crio[2294]: time="2024-08-19 12:23:29.949556665Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37ebbb99-cc21-4438-8220-e2393c2170ef name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:23:29 pause-732494 crio[2294]: time="2024-08-19 12:23:29.949648733Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37ebbb99-cc21-4438-8220-e2393c2170ef name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:23:29 pause-732494 crio[2294]: time="2024-08-19 12:23:29.949948375Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5472142059dc1e0d942de7e16bb4a4cd81635be4286bffaf5eacfbcf274c7600,PodSandboxId:1ffd10874e037e4e4fcd6b216fa60209e1f6bee9750372a45c6ff802a4abfe89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724070188044038410,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wpw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 970f3a70-eaec-4fa5-805b-d73e1d0b5bd5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8877c6b07f8505ecb2154decf63f9b2c033210e5e5bd4969754f75fe36c88cf6,PodSandboxId:b5f58edd9ed7ddabf338cccb91f14ac18e3ec9f587f9dd82aedf20ac24fb0557,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724070184408627339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef9fe889f780f0982931809bbf7d2ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11fa5592e91c96d71da4281f3d06093b317902db21e9bbba4e780c30c423d362,PodSandboxId:0d620d042fc8d6afc748db24f1e3d0ca7886af8a0c173dd84d3952bdc47f4672,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724070184416361624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07c695b284933de9b58e4af8ab9fe584,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab01352ae9be4604feee7e0d92d6aff8d60f32cc2ba34d6d090518291eb5bce,PodSandboxId:237fab524d8540f19afdbb4499d4485dec65050ba9eaf58fd115886d43dcc723,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724070184421851334,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65855e373718b09045fe1e68c3ae64a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e94100bd01e2a12a024ad475ac4e28b03a98c63cdbf5a27557ca11b485a43f1,PodSandboxId:e4d65ecc1c7223855c73b559fa8f6f2ad7605e3dca3f53f1e00f5b74d7bbef5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724070184378365435,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4a1243f79a8e11c8ec728acd096082,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee34eb433386d39df2a34a246ae9285341bd41cdcc5cb9467f0a669c55745dd6,PodSandboxId:1521c494ddb4d051f82a73651c27e0791e4bd29737e72f6f1314e3af2319a027,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070181806017331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-njb6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6d1896-a35f-4c32-af3c-0cfcad6829a5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f709bf5da6fd4d408a7e54b1ee6a528e5f32e8b0ad2d49fec9ff5fc186598c5d,PodSandboxId:1521c494ddb4d051f82a73651c27e0791e4bd29737e72f6f1314e3af2319a027,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724070161013361508,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-njb6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6d1896-a35f-4c32-af3c-0cfcad6829a5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d046b6b14ce676d0afa11cb1cc91ccb4234a25da2d94e0fe3d2eec6f0af3b51d,PodSandboxId:0d620d042fc8d6afc748db24f1e3d0ca7886af8a0c173dd84d3952bdc47f4672,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724070160286354380,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07c695b284933de9b58e4af8ab9fe584,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d093640f912a4953d7d7640925c0c37b84a18e2d96e841b2fcb9417f3f1c60,PodSandboxId:b5f58edd9ed7ddabf338cccb91f14ac18e3ec9f587f9dd82aedf20ac24fb0557,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724070160221136026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-732494,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef9fe889f780f0982931809bbf7d2ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0125c48d70b7ffe290c4dd43414dc36ded6bf35818dc29ab7d8ac945fdec4a59,PodSandboxId:e4d65ecc1c7223855c73b559fa8f6f2ad7605e3dca3f53f1e00f5b74d7bbef5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724070160173816925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-73249
4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4a1243f79a8e11c8ec728acd096082,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c60e87585c18b8c957d1589206c0b6314660d2d53a3e5f15d6d2ba55239ce612,PodSandboxId:1ffd10874e037e4e4fcd6b216fa60209e1f6bee9750372a45c6ff802a4abfe89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724070160027600255,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wpw2,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 970f3a70-eaec-4fa5-805b-d73e1d0b5bd5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:641b6edcbd83a540e5e005320007d44ab96f19ba227e084c263b8094d648669a,PodSandboxId:237fab524d8540f19afdbb4499d4485dec65050ba9eaf58fd115886d43dcc723,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724070160090947392,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: a65855e373718b09045fe1e68c3ae64a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71ea972f68efba523d2b778fd4605656e17625fd8ec7ce7a9612db8f732c3c20,PodSandboxId:b8ece353147142272aa444a998cff4ce753f98232648af5c5c3bfa97c594fada,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724070129367789151,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ftmpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5264410a-eee3-42c3-9a5c-
6452ab172aae,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37ebbb99-cc21-4438-8220-e2393c2170ef name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:23:29 pause-732494 crio[2294]: time="2024-08-19 12:23:29.989667315Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e35123c9-eb8c-4084-8ccf-231650cd7f11 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:23:29 pause-732494 crio[2294]: time="2024-08-19 12:23:29.989869317Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e35123c9-eb8c-4084-8ccf-231650cd7f11 name=/runtime.v1.RuntimeService/Version
	Aug 19 12:23:29 pause-732494 crio[2294]: time="2024-08-19 12:23:29.991380935Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=017f6550-c7d5-4fed-9f0a-0b9e013f3f73 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:23:29 pause-732494 crio[2294]: time="2024-08-19 12:23:29.991804831Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070209991778503,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=017f6550-c7d5-4fed-9f0a-0b9e013f3f73 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 12:23:29 pause-732494 crio[2294]: time="2024-08-19 12:23:29.992351701Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a220c0a-84cf-47d9-886a-f78e1b024141 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:23:29 pause-732494 crio[2294]: time="2024-08-19 12:23:29.992419314Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a220c0a-84cf-47d9-886a-f78e1b024141 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 12:23:29 pause-732494 crio[2294]: time="2024-08-19 12:23:29.992688619Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5472142059dc1e0d942de7e16bb4a4cd81635be4286bffaf5eacfbcf274c7600,PodSandboxId:1ffd10874e037e4e4fcd6b216fa60209e1f6bee9750372a45c6ff802a4abfe89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724070188044038410,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wpw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 970f3a70-eaec-4fa5-805b-d73e1d0b5bd5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8877c6b07f8505ecb2154decf63f9b2c033210e5e5bd4969754f75fe36c88cf6,PodSandboxId:b5f58edd9ed7ddabf338cccb91f14ac18e3ec9f587f9dd82aedf20ac24fb0557,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724070184408627339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef9fe889f780f0982931809bbf7d2ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11fa5592e91c96d71da4281f3d06093b317902db21e9bbba4e780c30c423d362,PodSandboxId:0d620d042fc8d6afc748db24f1e3d0ca7886af8a0c173dd84d3952bdc47f4672,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724070184416361624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07c695b284933de9b58e4af8ab9fe584,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab01352ae9be4604feee7e0d92d6aff8d60f32cc2ba34d6d090518291eb5bce,PodSandboxId:237fab524d8540f19afdbb4499d4485dec65050ba9eaf58fd115886d43dcc723,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724070184421851334,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65855e373718b09045fe1e68c3ae64a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e94100bd01e2a12a024ad475ac4e28b03a98c63cdbf5a27557ca11b485a43f1,PodSandboxId:e4d65ecc1c7223855c73b559fa8f6f2ad7605e3dca3f53f1e00f5b74d7bbef5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724070184378365435,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4a1243f79a8e11c8ec728acd096082,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee34eb433386d39df2a34a246ae9285341bd41cdcc5cb9467f0a669c55745dd6,PodSandboxId:1521c494ddb4d051f82a73651c27e0791e4bd29737e72f6f1314e3af2319a027,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724070181806017331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-njb6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6d1896-a35f-4c32-af3c-0cfcad6829a5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f709bf5da6fd4d408a7e54b1ee6a528e5f32e8b0ad2d49fec9ff5fc186598c5d,PodSandboxId:1521c494ddb4d051f82a73651c27e0791e4bd29737e72f6f1314e3af2319a027,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724070161013361508,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-njb6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6d1896-a35f-4c32-af3c-0cfcad6829a5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d046b6b14ce676d0afa11cb1cc91ccb4234a25da2d94e0fe3d2eec6f0af3b51d,PodSandboxId:0d620d042fc8d6afc748db24f1e3d0ca7886af8a0c173dd84d3952bdc47f4672,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724070160286354380,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07c695b284933de9b58e4af8ab9fe584,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d093640f912a4953d7d7640925c0c37b84a18e2d96e841b2fcb9417f3f1c60,PodSandboxId:b5f58edd9ed7ddabf338cccb91f14ac18e3ec9f587f9dd82aedf20ac24fb0557,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724070160221136026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-732494,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef9fe889f780f0982931809bbf7d2ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0125c48d70b7ffe290c4dd43414dc36ded6bf35818dc29ab7d8ac945fdec4a59,PodSandboxId:e4d65ecc1c7223855c73b559fa8f6f2ad7605e3dca3f53f1e00f5b74d7bbef5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724070160173816925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-73249
4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4a1243f79a8e11c8ec728acd096082,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c60e87585c18b8c957d1589206c0b6314660d2d53a3e5f15d6d2ba55239ce612,PodSandboxId:1ffd10874e037e4e4fcd6b216fa60209e1f6bee9750372a45c6ff802a4abfe89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724070160027600255,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wpw2,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 970f3a70-eaec-4fa5-805b-d73e1d0b5bd5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:641b6edcbd83a540e5e005320007d44ab96f19ba227e084c263b8094d648669a,PodSandboxId:237fab524d8540f19afdbb4499d4485dec65050ba9eaf58fd115886d43dcc723,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724070160090947392,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-732494,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: a65855e373718b09045fe1e68c3ae64a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71ea972f68efba523d2b778fd4605656e17625fd8ec7ce7a9612db8f732c3c20,PodSandboxId:b8ece353147142272aa444a998cff4ce753f98232648af5c5c3bfa97c594fada,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724070129367789151,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ftmpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5264410a-eee3-42c3-9a5c-
6452ab172aae,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a220c0a-84cf-47d9-886a-f78e1b024141 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5472142059dc1       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   22 seconds ago       Running             kube-proxy                2                   1ffd10874e037       kube-proxy-4wpw2
	2ab01352ae9be       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   25 seconds ago       Running             kube-apiserver            2                   237fab524d854       kube-apiserver-pause-732494
	11fa5592e91c9       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   25 seconds ago       Running             kube-scheduler            2                   0d620d042fc8d       kube-scheduler-pause-732494
	8877c6b07f850       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   25 seconds ago       Running             etcd                      2                   b5f58edd9ed7d       etcd-pause-732494
	5e94100bd01e2       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   25 seconds ago       Running             kube-controller-manager   2                   e4d65ecc1c722       kube-controller-manager-pause-732494
	ee34eb433386d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   28 seconds ago       Running             coredns                   2                   1521c494ddb4d       coredns-6f6b679f8f-njb6z
	f709bf5da6fd4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   49 seconds ago       Exited              coredns                   1                   1521c494ddb4d       coredns-6f6b679f8f-njb6z
	d046b6b14ce67       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   49 seconds ago       Exited              kube-scheduler            1                   0d620d042fc8d       kube-scheduler-pause-732494
	e3d093640f912       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   49 seconds ago       Exited              etcd                      1                   b5f58edd9ed7d       etcd-pause-732494
	0125c48d70b7f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   49 seconds ago       Exited              kube-controller-manager   1                   e4d65ecc1c722       kube-controller-manager-pause-732494
	641b6edcbd83a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   49 seconds ago       Exited              kube-apiserver            1                   237fab524d854       kube-apiserver-pause-732494
	c60e87585c18b       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   50 seconds ago       Exited              kube-proxy                1                   1ffd10874e037       kube-proxy-4wpw2
	71ea972f68efb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   b8ece35314714       coredns-6f6b679f8f-ftmpt
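
A container listing of this shape can normally be reproduced directly on the node with crictl against the CRI-O socket (the socket path is taken from the cri-socket annotation under "describe nodes" below; treat the exact invocation as an illustrative sketch, not part of the captured log):

	  # list every container CRI-O knows about, running and exited
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a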
	
	
	==> coredns [71ea972f68efba523d2b778fd4605656e17625fd8ec7ce7a9612db8f732c3c20] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ee34eb433386d39df2a34a246ae9285341bd41cdcc5cb9467f0a669c55745dd6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51740 - 3507 "HINFO IN 5015129589594737323.4920509428964378509. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018596609s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=458": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=458": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=456": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=456": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=458": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=458": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [f709bf5da6fd4d408a7e54b1ee6a528e5f32e8b0ad2d49fec9ff5fc186598c5d] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:47571 - 13684 "HINFO IN 89910630764924702.7204132197362725527. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.074883099s
	
	
	==> describe nodes <==
	Name:               pause-732494
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-732494
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=pause-732494
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T12_22_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:22:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-732494
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:23:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:23:07 +0000   Mon, 19 Aug 2024 12:21:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:23:07 +0000   Mon, 19 Aug 2024 12:21:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:23:07 +0000   Mon, 19 Aug 2024 12:21:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:23:07 +0000   Mon, 19 Aug 2024 12:22:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.24
	  Hostname:    pause-732494
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 c0dad5176af54959be2b433942849eb5
	  System UUID:                c0dad517-6af5-4959-be2b-433942849eb5
	  Boot ID:                    235f441c-14fa-4c4e-836a-10850720ff7e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-njb6z                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     82s
	  kube-system                 etcd-pause-732494                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         87s
	  kube-system                 kube-apiserver-pause-732494             250m (12%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-controller-manager-pause-732494    200m (10%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-proxy-4wpw2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-scheduler-pause-732494             100m (5%)     0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 80s                kube-proxy       
	  Normal  Starting                 22s                kube-proxy       
	  Normal  Starting                 44s                kube-proxy       
	  Normal  Starting                 87s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  87s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  87s                kubelet          Node pause-732494 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s                kubelet          Node pause-732494 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s                kubelet          Node pause-732494 status is now: NodeHasSufficientPID
	  Normal  NodeReady                86s                kubelet          Node pause-732494 status is now: NodeReady
	  Normal  RegisteredNode           83s                node-controller  Node pause-732494 event: Registered Node pause-732494 in Controller
	  Normal  RegisteredNode           43s                node-controller  Node pause-732494 event: Registered Node pause-732494 in Controller
	  Normal  Starting                 27s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  26s (x8 over 27s)  kubelet          Node pause-732494 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 27s)  kubelet          Node pause-732494 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 27s)  kubelet          Node pause-732494 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           20s                node-controller  Node pause-732494 event: Registered Node pause-732494 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.789431] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.063638] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065002] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.189384] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.122451] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.295195] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.136912] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.759014] systemd-fstab-generator[903]: Ignoring "noauto" option for root device
	[  +0.065578] kauditd_printk_skb: 158 callbacks suppressed
	[Aug19 12:22] systemd-fstab-generator[1239]: Ignoring "noauto" option for root device
	[  +0.089547] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.286281] systemd-fstab-generator[1359]: Ignoring "noauto" option for root device
	[  +0.101667] kauditd_printk_skb: 18 callbacks suppressed
	[ +11.550639] kauditd_printk_skb: 98 callbacks suppressed
	[ +18.325564] systemd-fstab-generator[2213]: Ignoring "noauto" option for root device
	[  +0.131583] systemd-fstab-generator[2225]: Ignoring "noauto" option for root device
	[  +0.169896] systemd-fstab-generator[2239]: Ignoring "noauto" option for root device
	[  +0.139868] systemd-fstab-generator[2251]: Ignoring "noauto" option for root device
	[  +0.282909] systemd-fstab-generator[2279]: Ignoring "noauto" option for root device
	[  +1.745939] systemd-fstab-generator[2906]: Ignoring "noauto" option for root device
	[  +4.559311] kauditd_printk_skb: 203 callbacks suppressed
	[Aug19 12:23] systemd-fstab-generator[3400]: Ignoring "noauto" option for root device
	[  +4.645021] kauditd_printk_skb: 45 callbacks suppressed
	[ +16.199377] systemd-fstab-generator[3832]: Ignoring "noauto" option for root device
	
	
	==> etcd [8877c6b07f8505ecb2154decf63f9b2c033210e5e5bd4969754f75fe36c88cf6] <==
	{"level":"info","ts":"2024-08-19T12:23:12.780548Z","caller":"traceutil/trace.go:171","msg":"trace[476268379] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:521; }","duration":"419.015699ms","start":"2024-08-19T12:23:12.361498Z","end":"2024-08-19T12:23:12.780513Z","steps":["trace[476268379] 'agreement among raft nodes before linearized reading'  (duration: 415.767492ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T12:23:12.780637Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T12:23:12.361469Z","time spent":"419.153468ms","remote":"127.0.0.1:38840","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":4136,"request content":"key:\"/registry/deployments/kube-system/coredns\" "}
	{"level":"warn","ts":"2024-08-19T12:23:12.780827Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T12:23:12.358736Z","time spent":"422.082467ms","remote":"127.0.0.1:38562","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5862,"request content":"key:\"/registry/pods/kube-system/etcd-pause-732494\" "}
	{"level":"warn","ts":"2024-08-19T12:23:13.556300Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"460.19134ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1654388325119112165 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:519 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-19T12:23:13.556578Z","caller":"traceutil/trace.go:171","msg":"trace[963687503] transaction","detail":"{read_only:false; response_revision:523; number_of_response:1; }","duration":"758.327102ms","start":"2024-08-19T12:23:12.798216Z","end":"2024-08-19T12:23:13.556543Z","steps":["trace[963687503] 'process raft request'  (duration: 758.251797ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T12:23:13.556691Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T12:23:12.798203Z","time spent":"758.449846ms","remote":"127.0.0.1:38562","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5132,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-6f6b679f8f-njb6z\" mod_revision:487 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-6f6b679f8f-njb6z\" value_size:5073 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-6f6b679f8f-njb6z\" > >"}
	{"level":"info","ts":"2024-08-19T12:23:13.556922Z","caller":"traceutil/trace.go:171","msg":"trace[286289547] transaction","detail":"{read_only:false; response_revision:522; number_of_response:1; }","duration":"761.02589ms","start":"2024-08-19T12:23:12.795885Z","end":"2024-08-19T12:23:13.556911Z","steps":["trace[286289547] 'process raft request'  (duration: 300.165079ms)","trace[286289547] 'compare'  (duration: 459.873177ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T12:23:13.556998Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T12:23:12.795865Z","time spent":"761.101207ms","remote":"127.0.0.1:38840","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4118,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:519 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"info","ts":"2024-08-19T12:23:13.557182Z","caller":"traceutil/trace.go:171","msg":"trace[277296740] linearizableReadLoop","detail":"{readStateIndex:559; appliedIndex:558; }","duration":"759.777261ms","start":"2024-08-19T12:23:12.797397Z","end":"2024-08-19T12:23:13.557175Z","steps":["trace[277296740] 'read index received'  (duration: 298.66299ms)","trace[277296740] 'applied index is now lower than readState.Index'  (duration: 461.113221ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T12:23:13.557299Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"759.897068ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-732494\" ","response":"range_response_count:1 size:5840"}
	{"level":"info","ts":"2024-08-19T12:23:13.557338Z","caller":"traceutil/trace.go:171","msg":"trace[916470048] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-732494; range_end:; response_count:1; response_revision:523; }","duration":"759.938486ms","start":"2024-08-19T12:23:12.797394Z","end":"2024-08-19T12:23:13.557332Z","steps":["trace[916470048] 'agreement among raft nodes before linearized reading'  (duration: 759.848682ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T12:23:13.557373Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T12:23:12.797362Z","time spent":"760.006205ms","remote":"127.0.0.1:38562","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5862,"request content":"key:\"/registry/pods/kube-system/etcd-pause-732494\" "}
	{"level":"warn","ts":"2024-08-19T12:23:13.957968Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.016433ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1654388325119112170 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-6f6b679f8f-njb6z\" mod_revision:523 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-6f6b679f8f-njb6z\" value_size:4895 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-6f6b679f8f-njb6z\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-19T12:23:13.958041Z","caller":"traceutil/trace.go:171","msg":"trace[1619200175] linearizableReadLoop","detail":"{readStateIndex:561; appliedIndex:560; }","duration":"384.660959ms","start":"2024-08-19T12:23:13.573371Z","end":"2024-08-19T12:23:13.958031Z","steps":["trace[1619200175] 'read index received'  (duration: 201.538039ms)","trace[1619200175] 'applied index is now lower than readState.Index'  (duration: 183.122181ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T12:23:13.958119Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"384.745904ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-732494\" ","response":"range_response_count:1 size:5840"}
	{"level":"info","ts":"2024-08-19T12:23:13.958147Z","caller":"traceutil/trace.go:171","msg":"trace[979662940] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-732494; range_end:; response_count:1; response_revision:524; }","duration":"384.774534ms","start":"2024-08-19T12:23:13.573367Z","end":"2024-08-19T12:23:13.958142Z","steps":["trace[979662940] 'agreement among raft nodes before linearized reading'  (duration: 384.69204ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T12:23:13.958170Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T12:23:13.573342Z","time spent":"384.822353ms","remote":"127.0.0.1:38562","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5862,"request content":"key:\"/registry/pods/kube-system/etcd-pause-732494\" "}
	{"level":"info","ts":"2024-08-19T12:23:13.958279Z","caller":"traceutil/trace.go:171","msg":"trace[1866670096] transaction","detail":"{read_only:false; response_revision:524; number_of_response:1; }","duration":"385.297251ms","start":"2024-08-19T12:23:13.572969Z","end":"2024-08-19T12:23:13.958266Z","steps":["trace[1866670096] 'process raft request'  (duration: 201.930255ms)","trace[1866670096] 'compare'  (duration: 182.926104ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T12:23:13.958519Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T12:23:13.572953Z","time spent":"385.522609ms","remote":"127.0.0.1:38562","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4954,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-6f6b679f8f-njb6z\" mod_revision:523 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-6f6b679f8f-njb6z\" value_size:4895 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-6f6b679f8f-njb6z\" > >"}
	{"level":"info","ts":"2024-08-19T12:23:13.959151Z","caller":"traceutil/trace.go:171","msg":"trace[385061752] transaction","detail":"{read_only:false; response_revision:527; number_of_response:1; }","duration":"376.722751ms","start":"2024-08-19T12:23:13.582403Z","end":"2024-08-19T12:23:13.959126Z","steps":["trace[385061752] 'process raft request'  (duration: 376.188929ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T12:23:13.959241Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T12:23:13.582389Z","time spent":"376.808721ms","remote":"127.0.0.1:38852","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3782,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-6f6b679f8f\" mod_revision:521 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-6f6b679f8f\" value_size:3722 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-6f6b679f8f\" > >"}
	{"level":"info","ts":"2024-08-19T12:23:13.960922Z","caller":"traceutil/trace.go:171","msg":"trace[793178495] transaction","detail":"{read_only:false; response_revision:526; number_of_response:1; }","duration":"382.986765ms","start":"2024-08-19T12:23:13.577923Z","end":"2024-08-19T12:23:13.960910Z","steps":["trace[793178495] 'process raft request'  (duration: 380.641271ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T12:23:13.962056Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T12:23:13.577909Z","time spent":"384.057397ms","remote":"127.0.0.1:38646","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1252,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-lvd7l\" mod_revision:520 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-lvd7l\" value_size:1193 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-lvd7l\" > >"}
	{"level":"info","ts":"2024-08-19T12:23:13.964757Z","caller":"traceutil/trace.go:171","msg":"trace[2021394293] transaction","detail":"{read_only:false; response_revision:525; number_of_response:1; }","duration":"390.986637ms","start":"2024-08-19T12:23:13.573458Z","end":"2024-08-19T12:23:13.964445Z","steps":["trace[2021394293] 'process raft request'  (duration: 385.056734ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T12:23:13.965737Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T12:23:13.573450Z","time spent":"391.5717ms","remote":"127.0.0.1:38530","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":785,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:518 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:728 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	
	
	==> etcd [e3d093640f912a4953d7d7640925c0c37b84a18e2d96e841b2fcb9417f3f1c60] <==
	{"level":"info","ts":"2024-08-19T12:22:42.898419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T12:22:42.898444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 received MsgPreVoteResp from 602226ed500416f5 at term 2"}
	{"level":"info","ts":"2024-08-19T12:22:42.898458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T12:22:42.898464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 received MsgVoteResp from 602226ed500416f5 at term 3"}
	{"level":"info","ts":"2024-08-19T12:22:42.898472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 became leader at term 3"}
	{"level":"info","ts":"2024-08-19T12:22:42.898479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 602226ed500416f5 elected leader 602226ed500416f5 at term 3"}
	{"level":"info","ts":"2024-08-19T12:22:42.900891Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"602226ed500416f5","local-member-attributes":"{Name:pause-732494 ClientURLs:[https://192.168.39.24:2379]}","request-path":"/0/members/602226ed500416f5/attributes","cluster-id":"6c3e0d5efc74209","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T12:22:42.901113Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T12:22:42.901177Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T12:22:42.901495Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T12:22:42.901534Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T12:22:42.902210Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T12:22:42.902185Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T12:22:42.902941Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.24:2379"}
	{"level":"info","ts":"2024-08-19T12:22:42.903374Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T12:22:51.947368Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-19T12:22:51.947427Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-732494","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.24:2380"],"advertise-client-urls":["https://192.168.39.24:2379"]}
	{"level":"warn","ts":"2024-08-19T12:22:51.947543Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T12:22:51.947653Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T12:22:51.972633Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.24:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T12:22:51.972685Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.24:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-19T12:22:51.972777Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"602226ed500416f5","current-leader-member-id":"602226ed500416f5"}
	{"level":"info","ts":"2024-08-19T12:22:51.979837Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.24:2380"}
	{"level":"info","ts":"2024-08-19T12:22:51.980012Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.24:2380"}
	{"level":"info","ts":"2024-08-19T12:22:51.980036Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-732494","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.24:2380"],"advertise-client-urls":["https://192.168.39.24:2379"]}
	
	
	==> kernel <==
	 12:23:30 up 1 min,  0 users,  load average: 1.04, 0.49, 0.18
	Linux pause-732494 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2ab01352ae9be4604feee7e0d92d6aff8d60f32cc2ba34d6d090518291eb5bce] <==
	I0819 12:23:07.641066       1 policy_source.go:224] refreshing policies
	I0819 12:23:07.669426       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 12:23:07.673492       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 12:23:07.674211       1 aggregator.go:171] initial CRD sync complete...
	I0819 12:23:07.674252       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 12:23:07.674262       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 12:23:07.674270       1 cache.go:39] Caches are synced for autoregister controller
	I0819 12:23:07.727049       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 12:23:07.728324       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 12:23:07.728550       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 12:23:07.728614       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 12:23:07.730555       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 12:23:07.731486       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 12:23:07.731513       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 12:23:07.760343       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 12:23:07.761977       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0819 12:23:07.793007       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0819 12:23:08.533776       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 12:23:09.033041       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 12:23:09.047133       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 12:23:09.096983       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 12:23:09.141247       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 12:23:09.154265       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 12:23:10.959631       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 12:23:11.846935       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [641b6edcbd83a540e5e005320007d44ab96f19ba227e084c263b8094d648669a] <==
	W0819 12:23:01.439604       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.439608       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.446137       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.486779       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.488054       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.491523       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.492882       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.499274       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.592483       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.610084       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.658169       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.674776       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.686570       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.731004       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.760070       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.783001       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.799529       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.852563       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.871463       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:01.874317       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:02.024050       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:02.052479       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:02.185209       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:02.235126       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 12:23:02.240037       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [0125c48d70b7ffe290c4dd43414dc36ded6bf35818dc29ab7d8ac945fdec4a59] <==
	I0819 12:22:47.458769       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-732494"
	I0819 12:22:47.458824       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0819 12:22:47.461295       1 shared_informer.go:320] Caches are synced for persistent volume
	I0819 12:22:47.463812       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0819 12:22:47.466743       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0819 12:22:47.469030       1 shared_informer.go:320] Caches are synced for PVC protection
	I0819 12:22:47.470312       1 shared_informer.go:320] Caches are synced for GC
	I0819 12:22:47.470394       1 shared_informer.go:320] Caches are synced for ephemeral
	I0819 12:22:47.470775       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0819 12:22:47.472153       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0819 12:22:47.472240       1 shared_informer.go:320] Caches are synced for crt configmap
	I0819 12:22:47.474788       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0819 12:22:47.478193       1 shared_informer.go:320] Caches are synced for endpoint
	I0819 12:22:47.487796       1 shared_informer.go:320] Caches are synced for deployment
	I0819 12:22:47.494772       1 shared_informer.go:320] Caches are synced for cronjob
	I0819 12:22:47.497064       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="72.747322ms"
	I0819 12:22:47.497797       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="49.361µs"
	I0819 12:22:47.569979       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0819 12:22:47.673127       1 shared_informer.go:320] Caches are synced for daemon sets
	I0819 12:22:47.674784       1 shared_informer.go:320] Caches are synced for stateful set
	I0819 12:22:47.679944       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 12:22:47.685748       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 12:22:48.084425       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 12:22:48.084532       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0819 12:22:48.114955       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [5e94100bd01e2a12a024ad475ac4e28b03a98c63cdbf5a27557ca11b485a43f1] <==
	I0819 12:23:10.949501       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-732494"
	I0819 12:23:10.949555       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0819 12:23:10.949744       1 shared_informer.go:320] Caches are synced for daemon sets
	I0819 12:23:10.950081       1 shared_informer.go:320] Caches are synced for job
	I0819 12:23:10.951691       1 shared_informer.go:320] Caches are synced for service account
	I0819 12:23:10.951894       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0819 12:23:10.960947       1 shared_informer.go:320] Caches are synced for namespace
	I0819 12:23:10.964403       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0819 12:23:10.967217       1 shared_informer.go:320] Caches are synced for stateful set
	I0819 12:23:10.968603       1 shared_informer.go:320] Caches are synced for TTL
	I0819 12:23:11.006370       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0819 12:23:11.008047       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0819 12:23:11.065352       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 12:23:11.124017       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 12:23:11.148746       1 shared_informer.go:320] Caches are synced for attach detach
	I0819 12:23:11.200039       1 shared_informer.go:320] Caches are synced for PV protection
	I0819 12:23:11.200107       1 shared_informer.go:320] Caches are synced for persistent volume
	I0819 12:23:11.585875       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 12:23:11.648223       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 12:23:11.648266       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0819 12:23:12.783953       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="1.819467818s"
	I0819 12:23:12.784347       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="86.664µs"
	I0819 12:23:13.982249       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="416.745273ms"
	I0819 12:23:14.014575       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="32.192362ms"
	I0819 12:23:14.014847       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="79.695µs"
	
	
	==> kube-proxy [5472142059dc1e0d942de7e16bb4a4cd81635be4286bffaf5eacfbcf274c7600] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 12:23:08.197082       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 12:23:08.210454       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.24"]
	E0819 12:23:08.210610       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 12:23:08.240350       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 12:23:08.240400       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 12:23:08.240445       1 server_linux.go:169] "Using iptables Proxier"
	I0819 12:23:08.242808       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 12:23:08.243052       1 server.go:483] "Version info" version="v1.31.0"
	I0819 12:23:08.243080       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:23:08.244797       1 config.go:197] "Starting service config controller"
	I0819 12:23:08.244821       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 12:23:08.244846       1 config.go:104] "Starting endpoint slice config controller"
	I0819 12:23:08.244851       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 12:23:08.245138       1 config.go:326] "Starting node config controller"
	I0819 12:23:08.245163       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 12:23:08.345894       1 shared_informer.go:320] Caches are synced for node config
	I0819 12:23:08.346010       1 shared_informer.go:320] Caches are synced for service config
	I0819 12:23:08.346025       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [c60e87585c18b8c957d1589206c0b6314660d2d53a3e5f15d6d2ba55239ce612] <==
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 12:22:42.319913       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 12:22:44.170628       1 server.go:666] "Failed to retrieve node info" err="nodes \"pause-732494\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot get resource \"nodes\" in API group \"\" at the cluster scope"
	I0819 12:22:45.331834       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.24"]
	E0819 12:22:45.332031       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 12:22:45.364546       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 12:22:45.364654       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 12:22:45.364719       1 server_linux.go:169] "Using iptables Proxier"
	I0819 12:22:45.367337       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 12:22:45.367785       1 server.go:483] "Version info" version="v1.31.0"
	I0819 12:22:45.367893       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:22:45.369156       1 config.go:197] "Starting service config controller"
	I0819 12:22:45.369260       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 12:22:45.369333       1 config.go:104] "Starting endpoint slice config controller"
	I0819 12:22:45.369381       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 12:22:45.369402       1 config.go:326] "Starting node config controller"
	I0819 12:22:45.369426       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 12:22:45.470261       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 12:22:45.470320       1 shared_informer.go:320] Caches are synced for service config
	I0819 12:22:45.470422       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [11fa5592e91c96d71da4281f3d06093b317902db21e9bbba4e780c30c423d362] <==
	I0819 12:23:05.359265       1 serving.go:386] Generated self-signed cert in-memory
	W0819 12:23:07.592212       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 12:23:07.592353       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 12:23:07.592432       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 12:23:07.592473       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 12:23:07.657648       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 12:23:07.657780       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:23:07.668771       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 12:23:07.669018       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 12:23:07.669896       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 12:23:07.669936       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 12:23:07.770938       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d046b6b14ce676d0afa11cb1cc91ccb4234a25da2d94e0fe3d2eec6f0af3b51d] <==
	I0819 12:22:42.352367       1 serving.go:386] Generated self-signed cert in-memory
	W0819 12:22:44.108273       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 12:22:44.108319       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 12:22:44.108330       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 12:22:44.108336       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 12:22:44.164530       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 12:22:44.164598       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:22:44.172487       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 12:22:44.172539       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 12:22:44.176784       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 12:22:44.176855       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	W0819 12:22:44.193205       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 12:22:44.193296       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0819 12:22:45.773169       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 12:22:51.815941       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 19 12:23:04 pause-732494 kubelet[3407]: I0819 12:23:04.358974    3407 scope.go:117] "RemoveContainer" containerID="e3d093640f912a4953d7d7640925c0c37b84a18e2d96e841b2fcb9417f3f1c60"
	Aug 19 12:23:04 pause-732494 kubelet[3407]: I0819 12:23:04.360782    3407 scope.go:117] "RemoveContainer" containerID="641b6edcbd83a540e5e005320007d44ab96f19ba227e084c263b8094d648669a"
	Aug 19 12:23:04 pause-732494 kubelet[3407]: I0819 12:23:04.361106    3407 scope.go:117] "RemoveContainer" containerID="0125c48d70b7ffe290c4dd43414dc36ded6bf35818dc29ab7d8ac945fdec4a59"
	Aug 19 12:23:04 pause-732494 kubelet[3407]: I0819 12:23:04.364081    3407 scope.go:117] "RemoveContainer" containerID="d046b6b14ce676d0afa11cb1cc91ccb4234a25da2d94e0fe3d2eec6f0af3b51d"
	Aug 19 12:23:04 pause-732494 kubelet[3407]: E0819 12:23:04.365964    3407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-732494?timeout=10s\": dial tcp 192.168.39.24:8443: connect: connection refused" interval="800ms"
	Aug 19 12:23:04 pause-732494 kubelet[3407]: I0819 12:23:04.576783    3407 kubelet_node_status.go:72] "Attempting to register node" node="pause-732494"
	Aug 19 12:23:04 pause-732494 kubelet[3407]: E0819 12:23:04.578340    3407 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.24:8443: connect: connection refused" node="pause-732494"
	Aug 19 12:23:04 pause-732494 kubelet[3407]: W0819 12:23:04.647392    3407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.24:8443: connect: connection refused
	Aug 19 12:23:04 pause-732494 kubelet[3407]: E0819 12:23:04.647504    3407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused" logger="UnhandledError"
	Aug 19 12:23:04 pause-732494 kubelet[3407]: W0819 12:23:04.738139    3407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.39.24:8443: connect: connection refused
	Aug 19 12:23:04 pause-732494 kubelet[3407]: E0819 12:23:04.738223    3407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused" logger="UnhandledError"
	Aug 19 12:23:05 pause-732494 kubelet[3407]: I0819 12:23:05.379894    3407 kubelet_node_status.go:72] "Attempting to register node" node="pause-732494"
	Aug 19 12:23:07 pause-732494 kubelet[3407]: I0819 12:23:07.712508    3407 kubelet_node_status.go:111] "Node was previously registered" node="pause-732494"
	Aug 19 12:23:07 pause-732494 kubelet[3407]: I0819 12:23:07.712613    3407 kubelet_node_status.go:75] "Successfully registered node" node="pause-732494"
	Aug 19 12:23:07 pause-732494 kubelet[3407]: I0819 12:23:07.712656    3407 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 19 12:23:07 pause-732494 kubelet[3407]: I0819 12:23:07.713620    3407 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 19 12:23:07 pause-732494 kubelet[3407]: I0819 12:23:07.722733    3407 apiserver.go:52] "Watching apiserver"
	Aug 19 12:23:07 pause-732494 kubelet[3407]: I0819 12:23:07.747044    3407 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 19 12:23:07 pause-732494 kubelet[3407]: I0819 12:23:07.806584    3407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/970f3a70-eaec-4fa5-805b-d73e1d0b5bd5-lib-modules\") pod \"kube-proxy-4wpw2\" (UID: \"970f3a70-eaec-4fa5-805b-d73e1d0b5bd5\") " pod="kube-system/kube-proxy-4wpw2"
	Aug 19 12:23:07 pause-732494 kubelet[3407]: I0819 12:23:07.806688    3407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/970f3a70-eaec-4fa5-805b-d73e1d0b5bd5-xtables-lock\") pod \"kube-proxy-4wpw2\" (UID: \"970f3a70-eaec-4fa5-805b-d73e1d0b5bd5\") " pod="kube-system/kube-proxy-4wpw2"
	Aug 19 12:23:08 pause-732494 kubelet[3407]: I0819 12:23:08.029076    3407 scope.go:117] "RemoveContainer" containerID="c60e87585c18b8c957d1589206c0b6314660d2d53a3e5f15d6d2ba55239ce612"
	Aug 19 12:23:13 pause-732494 kubelet[3407]: E0819 12:23:13.878680    3407 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070193877396767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:23:13 pause-732494 kubelet[3407]: E0819 12:23:13.878757    3407 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070193877396767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:23:23 pause-732494 kubelet[3407]: E0819 12:23:23.882256    3407 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070203881635169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 12:23:23 pause-732494 kubelet[3407]: E0819 12:23:23.882967    3407 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724070203881635169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 12:23:29.596972  152730 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19476-99410/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-732494 -n pause-732494
helpers_test.go:261: (dbg) Run:  kubectl --context pause-732494 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (69.96s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (7200.062s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0819 12:43:13.949316  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/bridge-787042/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:43:35.347684  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:44:34.201954  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/auto-787042/client.crt: no such file or directory" logger="UnhandledError"
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (26m22s)
	TestNetworkPlugins/group (16m17s)
	TestStartStop (23m41s)
	TestStartStop/group/default-k8s-diff-port (16m20s)
	TestStartStop/group/default-k8s-diff-port/serial (16m20s)
	TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (3m6s)
	TestStartStop/group/embed-certs (14m6s)
	TestStartStop/group/embed-certs/serial (14m6s)
	TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (1m52s)
	TestStartStop/group/no-preload (16m40s)
	TestStartStop/group/no-preload/serial (16m40s)
	TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (1m54s)
	TestStartStop/group/old-k8s-version (17m32s)
	TestStartStop/group/old-k8s-version/serial (17m32s)
	TestStartStop/group/old-k8s-version/serial/SecondStart (10m44s)

                                                
                                                
goroutine 3926 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 21 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0004fad00, 0xc000b4dbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000012a08, {0x4e6cd20, 0x2b, 0x2b}, {0x292a70a?, 0xc000c26d80?, 0x4f2a700?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0002914a0)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0002914a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0x195

                                                
                                                
goroutine 10 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000591d80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 3481 [syscall, 12 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x297ef, 0xc000b4aab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc0017c6600)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc0017c6600)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0018ca780)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc0018ca780)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc00137eb60, 0xc0018ca780)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x394c1d0, 0xc0004a93b0}, 0xc00137eb60, {0xc0018f1bd8, 0x16}, {0x0?, 0xc0012eff60?}, {0x551133?, 0x4a170f?}, {0xc00170ed80, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00137eb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00137eb60, 0xc00155e280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2884
	/usr/local/go/src/testing/testing.go:1742 +0x390
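
Goroutine 3481 above shows the validateSecondStart helper blocked in os/exec, waiting for a minikube command it launched to exit. A minimal sketch of that Run/Wait pattern follows; the deadline, profile name, and flags are illustrative placeholders, not the real helper's values, and only the standard library is assumed.

	package main

	import (
		"context"
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		// Illustrative deadline and arguments; the real test helper builds these per profile.
		ctx, cancel := context.WithTimeout(context.Background(), 13*time.Minute)
		defer cancel()

		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "start",
			"-p", "example-profile", "--driver=kvm2", "--container-runtime=crio")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr

		// Run = Start + Wait; Wait is where goroutine 3481 is parked until the child
		// exits, or until ctx expires and CommandContext kills the process.
		if err := cmd.Run(); err != nil {
			fmt.Println("minikube start failed:", err)
		}
	}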

                                                
                                                
goroutine 73 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 72
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

                                                
                                                
goroutine 2964 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2963
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1680 [chan receive, 27 minutes]:
testing.(*T).Run(0xc001eae000, {0x28cf1dc?, 0x55127c?}, 0xc001fe87f8)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc001eae000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc001eae000, 0x33b8b18)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 732 [chan send, 76 minutes]:
os/exec.(*Cmd).watchCtx(0xc0012cb200, 0xc00145f440)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 731
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 2895 [chan receive, 18 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00157e0c0, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2893
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2454 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394c390, 0xc000060300}, 0xc0014d6750, 0xc001403f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394c390, 0xc000060300}, 0xf3?, 0xc0014d6750, 0xc0014d6798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394c390?, 0xc000060300?}, 0xc00170c140?, 0xc00170c140?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001a62408?, 0x0?, 0xc0017a5060?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2472
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

                                                
                                                
goroutine 2108 [chan receive, 24 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc001340000, 0x33b8d40)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1778
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3284 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3283
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3602 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x394c1d0, 0xc0005a3e30}, {0x393f4c0, 0xc001b085e0}, 0x1, 0x0, 0xc001349c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x394c1d0?, 0xc0004664d0?}, 0x3b9aca00, 0xc001415e10?, 0x1, 0xc001415c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x394c1d0, 0xc0004664d0}, 0xc001eafba0, {0xc001e1e690, 0x12}, {0x28f56a9, 0x14}, {0x290d448, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x394c1d0, 0xc0004664d0}, 0xc001eafba0, {0xc001e1e690, 0x12}, {0x28dc6b0?, 0xc001484760?}, {0x551133?, 0x4a170f?}, {0xc0014b0100, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001eafba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001eafba0, 0xc0018b7a00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3372
	/usr/local/go/src/testing/testing.go:1742 +0x390
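
Goroutine 3602 (and the similar PodWait stacks below) is polling for the kubernetes-dashboard pods through apimachinery's PollUntilContextTimeout, using a 1s interval (0x3b9aca00 ns in the frame above) and the 9m budget from the "waiting 9m0s for pods" line. A minimal sketch of that polling call is shown below, assuming a module that depends on k8s.io/apimachinery; the condition body is a stub standing in for the real pod lookup in helpers_test.go.

	package main

	import (
		"context"
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		// 1-second interval, 9-minute overall budget, immediate first check.
		err := wait.PollUntilContextTimeout(context.Background(), time.Second, 9*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				// Stub: the real helper lists pods matching k8s-app=kubernetes-dashboard
				// in the kubernetes-dashboard namespace and returns true once they are Running.
				return false, nil
			})
		// With the stub above this returns a timeout error after 9 minutes.
		fmt.Println("PodWait result:", err)
	}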

                                                
                                                
goroutine 2576 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2575
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 405 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000b55e50, 0x23)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc0002aad80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3967000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000b55e80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00083a4e0, {0x3925c60, 0xc0007a9530}, 0x1, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00083a4e0, 0x3b9aca00, 0x0, 0x1, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 384
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

                                                
                                                
goroutine 2992 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2991
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 611 [chan send, 76 minutes]:
os/exec.(*Cmd).watchCtx(0xc001cf7200, 0xc001cdefc0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 610
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 3010 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3942a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2986
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2455 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2454
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 213 [IO wait, 79 minutes]:
internal/poll.runtime_pollWait(0x7f8c3516c368, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xf?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc00041a000)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc00041a000)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000b56840)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc000b56840)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0002305a0, {0x393edd0, 0xc000b56840})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc0002305a0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0x35343a3031203931?, 0xc0013401a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 210
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

                                                
                                                
goroutine 2660 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2659
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3577 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x394c1d0, 0xc0004a9340}, {0x393f4c0, 0xc001d33540}, 0x1, 0x0, 0xc001345c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x394c1d0?, 0xc0006305b0?}, 0x3b9aca00, 0xc001349e10?, 0x1, 0xc001349c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x394c1d0, 0xc0006305b0}, 0xc001eaf860, {0xc001d282e8, 0x11}, {0x28f56a9, 0x14}, {0x290d448, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x394c1d0, 0xc0006305b0}, 0xc001eaf860, {0xc001d282e8, 0x11}, {0x28da466?, 0xc0014daf60?}, {0x551133?, 0x4a170f?}, {0xc0009fa700, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001eaf860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001eaf860, 0xc0018b6900)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3030
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2303 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3942a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2335
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2658 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc00157f810, 0x13)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc00148cd80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3967000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00157f840)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001d50250, {0x3925c60, 0xc0007a8ed0}, 0x1, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001d50250, 0x3b9aca00, 0x0, 0x1, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2635
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

                                                
                                                
goroutine 620 [chan send, 76 minutes]:
os/exec.(*Cmd).watchCtx(0xc000209200, 0xc001b34540)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 336
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 3587 [IO wait]:
internal/poll.runtime_pollWait(0x7f8c34794400, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0018b7880?, 0xc00099b800?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0018b7880, {0xc00099b800, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc0018b7880, {0xc00099b800?, 0xc000897a40?, 0x2?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000856470, {0xc00099b800?, 0xc00099b85f?, 0x6f?})
	/usr/local/go/src/net/net.go:185 +0x45
crypto/tls.(*atLeastReader).Read(0xc00143f110, {0xc00099b800?, 0x0?, 0xc00143f110?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc001958d30, {0x3926400, 0xc00143f110})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc001958a88, {0x7f8c34799e58, 0xc001300678}, 0xc001360980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc001958a88, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc001958a88, {0xc000601000, 0x1000, 0xc001c15dc0?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc00184c7e0, {0xc001d54660, 0x9, 0x4e27c30?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x39248a0, 0xc00184c7e0}, {0xc001d54660, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc001d54660, 0x9, 0x1360dc0?}, {0x39248a0?, 0xc00184c7e0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc001d54620)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc001360fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc00150d500)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/transport.go:2250 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 3586
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 347 [chan send, 77 minutes]:
os/exec.(*Cmd).watchCtx(0xc0012cac00, 0xc0013912c0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 346
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 383 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3942a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 361
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2841 [chan receive, 18 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000964080, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2836
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 1826 [chan receive, 16 minutes]:
testing.(*testContext).waitParallel(0xc0005328c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1665 +0x5e9
testing.tRunner(0xc0004fb040, 0xc001fe87f8)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1680
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2472 [chan receive, 19 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000964480, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2470
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2852 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2851
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2659 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394c390, 0xc000060300}, 0xc001484750, 0xc001423f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394c390, 0xc000060300}, 0xe0?, 0xc001484750, 0xc001484798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394c390?, 0xc000060300?}, 0x9c7016?, 0xc0018ca480?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592de5?, 0xc0018ca780?, 0xc00051cde0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2635
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

                                                
                                                
goroutine 1778 [chan receive, 24 minutes]:
testing.(*T).Run(0xc001eaed00, {0x28cf1dc?, 0x551133?}, 0x33b8d40)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc001eaed00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc001eaed00, 0x33b8b60)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2574 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001b9d210, 0x13)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc0012d0d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3967000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001b9d240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000611190, {0x3925c60, 0xc001cdd290}, 0x1, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000611190, 0x3b9aca00, 0x0, 0x1, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2557
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

                                                
                                                
goroutine 384 [chan receive, 77 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000b55e80, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 361
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3224 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001cda2d0, 0x3)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc001357d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3967000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001cda300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000842d70, {0x3925c60, 0xc001cdc000}, 0x1, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000842d70, 0x3b9aca00, 0x0, 0x1, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3177
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

                                                
                                                
goroutine 407 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 406
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 406 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394c390, 0xc000060300}, 0xc000112750, 0xc000b45f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394c390, 0xc000060300}, 0x20?, 0xc000112750, 0xc000112798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394c390?, 0xc000060300?}, 0xc0004fb380?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592de5?, 0xc00083ef00?, 0xc000061f20?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 384
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

                                                
                                                
goroutine 3011 [chan receive, 18 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00157f380, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2986
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2963 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394c390, 0xc000060300}, 0xc0014d5f50, 0xc0014d5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394c390, 0xc000060300}, 0xf3?, 0xc0014d5f50, 0xc0014d5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394c390?, 0xc000060300?}, 0xc0004ba390?, 0xc0014d5fd0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014d5fd0?, 0x592e44?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2895
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

                                                
                                                
goroutine 2356 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394c390, 0xc000060300}, 0xc0014d4f50, 0xc0014d4f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394c390, 0xc000060300}, 0x0?, 0xc0014d4f50, 0xc0014d4f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394c390?, 0xc000060300?}, 0xc00137e4e0?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014d4fd0?, 0x592e44?, 0xc001a68090?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2304
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

                                                
                                                
goroutine 3226 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3225
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3030 [chan receive, 2 minutes]:
testing.(*T).Run(0xc00137e820, {0x28fb460?, 0x60400000004?}, 0xc0018b6900)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00137e820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00137e820, 0xc001cca200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2112
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 795 [select, 76 minutes]:
net/http.(*persistConn).readLoop(0xc0015e8ea0)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 813
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 2355 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00059b450, 0x14)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc0002afd80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3967000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00059b4c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b52c00, {0x3925c60, 0xc001440600}, 0x1, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000b52c00, 0x3b9aca00, 0x0, 0x1, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2304
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

                                                
                                                
goroutine 3176 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3942a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3220
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2635 [chan receive, 18 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00157f840, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2651
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 796 [select, 76 minutes]:
net/http.(*persistConn).writeLoop(0xc0015e8ea0)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 813
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 2556 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3942a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2555
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3051 [chan receive, 4 minutes]:
testing.(*T).Run(0xc0004fb520, {0x28fb460?, 0x60400000004?}, 0xc000591180)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0004fb520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0004fb520, 0xc000591400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2111
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2575 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394c390, 0xc000060300}, 0xc0014d7750, 0xc0000acf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394c390, 0xc000060300}, 0x10?, 0xc0014d7750, 0xc0014d7798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394c390?, 0xc000060300?}, 0xc001eae820?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014d77d0?, 0x592e44?, 0xc00189a240?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2557
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

                                                
                                                
goroutine 2851 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394c390, 0xc000060300}, 0xc0014da750, 0xc0002acf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394c390, 0xc000060300}, 0x80?, 0xc0014da750, 0xc0014da798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394c390?, 0xc000060300?}, 0x9c7016?, 0xc0015c4480?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592de5?, 0xc00170e900?, 0xc00051d080?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2841
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

                                                
                                                
goroutine 2884 [chan receive, 12 minutes]:
testing.(*T).Run(0xc001eafa00, {0x28dc6c6?, 0x60400000004?}, 0xc00155e280)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001eafa00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001eafa00, 0xc0018b6e80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2109
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2557 [chan receive, 19 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001b9d240, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2555
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3177 [chan receive, 16 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001cda300, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3220
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2258 [chan receive, 14 minutes]:
testing.(*T).Run(0xc001340ea0, {0x28d078c?, 0x0?}, 0xc00155e380)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001340ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001340ea0, 0xc000b54480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2108
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3282 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00059b190, 0x3)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc00067fd80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3967000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00059b1c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001b4ed30, {0x3925c60, 0xc0021ac3f0}, 0x1, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001b4ed30, 0x3b9aca00, 0x0, 0x1, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3253
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

                                                
                                                
goroutine 3225 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394c390, 0xc000060300}, 0xc0012f3f50, 0xc0012f3f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394c390, 0xc000060300}, 0x40?, 0xc0012f3f50, 0xc0012f3f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394c390?, 0xc000060300?}, 0x100000000000000?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0012f3fd0?, 0x592e44?, 0xc0001fc180?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3177
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

                                                
                                                
goroutine 2109 [chan receive, 18 minutes]:
testing.(*T).Run(0xc001340680, {0x28d078c?, 0x0?}, 0xc0018b6e80)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001340680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001340680, 0xc000b54300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2108
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3372 [chan receive, 2 minutes]:
testing.(*T).Run(0xc0013416c0, {0x28fb460?, 0x60400000004?}, 0xc0018b7a00)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0013416c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0013416c0, 0xc00155e380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2258
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2112 [chan receive, 16 minutes]:
testing.(*T).Run(0xc001340b60, {0x28d078c?, 0x0?}, 0xc001cca200)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001340b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001340b60, 0xc000b543c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2108
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2453 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000964450, 0x13)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc00141ed80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3967000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000964480)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b52000, {0x3925c60, 0xc001af6030}, 0x1, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000b52000, 0x3b9aca00, 0x0, 0x1, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2472
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

                                                
                                                
goroutine 3283 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394c390, 0xc000060300}, 0xc0014a0f50, 0xc0014a0f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394c390, 0xc000060300}, 0x80?, 0xc0014a0f50, 0xc0014a0f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394c390?, 0xc000060300?}, 0xc001eae820?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014a0fd0?, 0x592e44?, 0xc00051d980?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3253
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

                                                
                                                
goroutine 2357 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2356
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3253 [chan receive, 16 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00059b1c0, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3266
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2111 [chan receive, 16 minutes]:
testing.(*T).Run(0xc0013409c0, {0x28d078c?, 0x0?}, 0xc000591400)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0013409c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0013409c0, 0xc000b54380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2108
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2471 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3942a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2470
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2304 [chan receive, 21 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00059b4c0, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2335
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2634 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3942a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2651
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3252 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3942a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3266
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2990 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00157f350, 0x13)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc0013b8d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3967000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00157f380)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b4cba0, {0x3925c60, 0xc001ac22d0}, 0x1, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000b4cba0, 0x3b9aca00, 0x0, 0x1, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3011
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

                                                
                                                
goroutine 2894 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3942a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2893
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2962 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00157e090, 0x13)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc001420d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3967000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00157e0c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001702a70, {0x3925c60, 0xc001a697a0}, 0x1, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001702a70, 0x3b9aca00, 0x0, 0x1, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2895
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

                                                
                                                
goroutine 2991 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394c390, 0xc000060300}, 0xc00149e750, 0xc0012cdf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394c390, 0xc000060300}, 0x0?, 0xc00149e750, 0xc00149e798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394c390?, 0xc000060300?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3011
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

                                                
                                                
goroutine 2840 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3942a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2836
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2850 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000964050, 0x13)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc001424d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3967000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000964080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0012e2790, {0x3925c60, 0xc001fe6000}, 0x1, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0012e2790, 0x3b9aca00, 0x0, 0x1, 0xc000060300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2841
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

                                                
                                                
goroutine 3452 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x394c1d0, 0xc0004a9d50}, {0x393f4c0, 0xc0006fb180}, 0x1, 0x0, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x394c1d0?, 0xc000174310?}, 0x3b9aca00, 0xc00006fe10?, 0x1, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x394c1d0, 0xc000174310}, 0xc001eaeea0, {0xc001e01960, 0x1c}, {0x28f56a9, 0x14}, {0x290d448, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x394c1d0, 0xc000174310}, 0xc001eaeea0, {0xc001e01960, 0x1c}, {0x28f85f3?, 0xc0014d6760?}, {0x551133?, 0x4a170f?}, {0xc0014b0000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001eaeea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001eaeea0, 0xc000591180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3051
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3468 [IO wait]:
internal/poll.runtime_pollWait(0x7f8c3516bca0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc000819180?, 0xc0013a6000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000819180, {0xc0013a6000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc000819180, {0xc0013a6000?, 0xc0004692c0?, 0x2?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc001936300, {0xc0013a6000?, 0xc0013a605f?, 0x70?})
	/usr/local/go/src/net/net.go:185 +0x45
crypto/tls.(*atLeastReader).Read(0xc001fe9278, {0xc0013a6000?, 0x0?, 0xc001fe9278?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc001958630, {0x3926400, 0xc001fe9278})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc001958388, {0x7f8c34799e58, 0xc001c16318}, 0xc0013b2980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc001958388, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc001958388, {0xc0005a6000, 0x1000, 0xc001c15c00?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc001ab41e0, {0xc001d54200, 0x9, 0x4e27c30?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x39248a0, 0xc001ab41e0}, {0xc001d54200, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc001d54200, 0x9, 0x13b2dc0?}, {0x39248a0?, 0xc001ab41e0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc001d541c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0013b2fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0018ca480)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/transport.go:2250 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 3467
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 3482 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x7f8c3516b9b8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001bdb4a0?, 0xc001aafb71?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001bdb4a0, {0xc001aafb71, 0x48f, 0x48f})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001936448, {0xc001aafb71?, 0x239e2c0?, 0x208?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0021ac900, {0x39246c0, 0xc0008565c0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3924800, 0xc0021ac900}, {0x39246c0, 0xc0008565c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001936448?, {0x3924800, 0xc0021ac900})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc001936448, {0x3924800, 0xc0021ac900})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x3924800, 0xc0021ac900}, {0x3924720, 0xc001936448}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc00155e280?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3481
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 3483 [IO wait]:
internal/poll.runtime_pollWait(0x7f8c347945f0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001bdb560?, 0xc001f4b7e8?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001bdb560, {0xc001f4b7e8, 0x14818, 0x14818})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001936478, {0xc001f4b7e8?, 0x239e2c0?, 0x3fe33?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0021ac930, {0x39246c0, 0xc001306228})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3924800, 0xc0021ac930}, {0x39246c0, 0xc001306228}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001936478?, {0x3924800, 0xc0021ac930})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc001936478, {0x3924800, 0xc0021ac930})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x3924800, 0xc0021ac930}, {0x3924720, 0xc001936478}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc000590900?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3481
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 3484 [select, 12 minutes]:
os/exec.(*Cmd).watchCtx(0xc0018ca780, 0xc00145ee40)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3481
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 3592 [IO wait]:
internal/poll.runtime_pollWait(0x7f8c3516c650, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc000591880?, 0xc0012f6000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000591880, {0xc0012f6000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc000591880, {0xc0012f6000?, 0x7f8c347b1730?, 0xc00143f008?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000856510, {0xc0012f6000?, 0xc001362938?, 0x41469b?})
	/usr/local/go/src/net/net.go:185 +0x45
crypto/tls.(*atLeastReader).Read(0xc00143f008, {0xc0012f6000?, 0x0?, 0xc00143f008?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc0019590b0, {0x3926400, 0xc00143f008})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc001958e08, {0x39257a0, 0xc000856510}, 0xc001362980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc001958e08, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc001958e08, {0xc0012fb000, 0x1000, 0xc001c15dc0?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc0013377a0, {0xc001d54900, 0x9, 0x4e27c30?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x39248a0, 0xc0013377a0}, {0xc001d54900, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc001d54900, 0x9, 0x1362dc0?}, {0x39248a0?, 0xc0013377a0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc001d548c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc001362fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc00150d800)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/transport.go:2250 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 3591
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/transport.go:865 +0xcfb

                                                
                                    

Test pass (170/208)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 9.09
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.0/json-events 11.34
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.14
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.61
22 TestOffline 77.84
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
28 TestCertOptions 60.43
29 TestCertExpiration 307.16
31 TestForceSystemdFlag 87.01
32 TestForceSystemdEnv 44.32
34 TestKVMDriverInstallOrUpdate 3.71
38 TestErrorSpam/setup 39.04
39 TestErrorSpam/start 0.35
40 TestErrorSpam/status 0.73
41 TestErrorSpam/pause 1.52
42 TestErrorSpam/unpause 1.66
43 TestErrorSpam/stop 4
46 TestFunctional/serial/CopySyncFile 0
47 TestFunctional/serial/StartWithProxy 61.57
48 TestFunctional/serial/AuditLog 0
49 TestFunctional/serial/SoftStart 34.11
50 TestFunctional/serial/KubeContext 0.04
51 TestFunctional/serial/KubectlGetPods 0.07
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.57
55 TestFunctional/serial/CacheCmd/cache/add_local 2.02
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
57 TestFunctional/serial/CacheCmd/cache/list 0.05
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.75
60 TestFunctional/serial/CacheCmd/cache/delete 0.1
61 TestFunctional/serial/MinikubeKubectlCmd 0.11
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
63 TestFunctional/serial/ExtraConfig 32.4
64 TestFunctional/serial/ComponentHealth 0.07
65 TestFunctional/serial/LogsCmd 1.37
66 TestFunctional/serial/LogsFileCmd 1.38
67 TestFunctional/serial/InvalidService 5.71
69 TestFunctional/parallel/ConfigCmd 0.35
70 TestFunctional/parallel/DashboardCmd 15.25
71 TestFunctional/parallel/DryRun 0.28
72 TestFunctional/parallel/InternationalLanguage 0.14
73 TestFunctional/parallel/StatusCmd 0.9
77 TestFunctional/parallel/ServiceCmdConnect 13.61
78 TestFunctional/parallel/AddonsCmd 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 34.4
81 TestFunctional/parallel/SSHCmd 0.45
82 TestFunctional/parallel/CpCmd 1.3
83 TestFunctional/parallel/MySQL 25.32
84 TestFunctional/parallel/FileSync 0.22
85 TestFunctional/parallel/CertSync 1.25
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.48
93 TestFunctional/parallel/License 0.2
95 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.48
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
100 TestFunctional/parallel/ImageCommands/ImageBuild 4.13
101 TestFunctional/parallel/ImageCommands/Setup 1.58
102 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
104 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.2
105 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.44
106 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.47
107 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.98
108 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.5
109 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
110 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.8
111 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
112 TestFunctional/parallel/ServiceCmd/DeployApp 7.17
113 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
114 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
118 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
119 TestFunctional/parallel/ProfileCmd/profile_not_create 0.28
120 TestFunctional/parallel/ProfileCmd/profile_list 0.27
121 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
122 TestFunctional/parallel/Version/short 0.05
123 TestFunctional/parallel/Version/components 0.61
124 TestFunctional/parallel/MountCmd/any-port 10.11
125 TestFunctional/parallel/ServiceCmd/List 1.22
126 TestFunctional/parallel/ServiceCmd/JSONOutput 1.23
127 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
128 TestFunctional/parallel/ServiceCmd/Format 0.37
129 TestFunctional/parallel/ServiceCmd/URL 0.33
130 TestFunctional/parallel/MountCmd/specific-port 1.76
131 TestFunctional/parallel/MountCmd/VerifyCleanup 1.3
132 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
135 TestFunctional/delete_echo-server_images 0.03
136 TestFunctional/delete_my-image_image 0.02
137 TestFunctional/delete_minikube_cached_images 0.02
141 TestMultiControlPlane/serial/StartCluster 190.65
142 TestMultiControlPlane/serial/DeployApp 5.85
143 TestMultiControlPlane/serial/PingHostFromPods 1.22
144 TestMultiControlPlane/serial/AddWorkerNode 54.55
145 TestMultiControlPlane/serial/NodeLabels 0.07
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.54
147 TestMultiControlPlane/serial/CopyFile 12.84
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.48
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
153 TestMultiControlPlane/serial/DeleteSecondaryNode 16.57
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.38
156 TestMultiControlPlane/serial/RestartCluster 347.73
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.42
158 TestMultiControlPlane/serial/AddSecondaryNode 80.22
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
163 TestJSONOutput/start/Command 56
164 TestJSONOutput/start/Audit 0
166 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/pause/Command 0.65
170 TestJSONOutput/pause/Audit 0
172 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/unpause/Command 0.6
176 TestJSONOutput/unpause/Audit 0
178 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/stop/Command 6.67
182 TestJSONOutput/stop/Audit 0
184 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
186 TestErrorJSONOutput 0.2
191 TestMainNoArgs 0.05
192 TestMinikubeProfile 90.6
195 TestMountStart/serial/StartWithMountFirst 27.15
196 TestMountStart/serial/VerifyMountFirst 0.38
197 TestMountStart/serial/StartWithMountSecond 27.86
198 TestMountStart/serial/VerifyMountSecond 0.37
199 TestMountStart/serial/DeleteFirst 0.89
200 TestMountStart/serial/VerifyMountPostDelete 0.37
201 TestMountStart/serial/Stop 1.27
202 TestMountStart/serial/RestartStopped 23.41
203 TestMountStart/serial/VerifyMountPostStop 0.38
206 TestMultiNode/serial/FreshStart2Nodes 109.56
207 TestMultiNode/serial/DeployApp2Nodes 4.67
208 TestMultiNode/serial/PingHostFrom2Pods 0.79
209 TestMultiNode/serial/AddNode 49.62
210 TestMultiNode/serial/MultiNodeLabels 0.06
211 TestMultiNode/serial/ProfileList 0.21
212 TestMultiNode/serial/CopyFile 7.25
213 TestMultiNode/serial/StopNode 2.24
214 TestMultiNode/serial/StartAfterStop 39.13
216 TestMultiNode/serial/DeleteNode 2.12
218 TestMultiNode/serial/RestartMultiNode 187.37
219 TestMultiNode/serial/ValidateNameConflict 40.34
226 TestScheduledStopUnix 114.83
230 TestRunningBinaryUpgrade 197.22
235 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
236 TestNoKubernetes/serial/StartWithK8s 93.36
248 TestNoKubernetes/serial/StartWithStopK8s 67.78
249 TestNoKubernetes/serial/Start 53.95
251 TestPause/serial/Start 70.94
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
260 TestNoKubernetes/serial/ProfileList 0.82
261 TestNoKubernetes/serial/Stop 1.29
262 TestNoKubernetes/serial/StartNoArgs 62.68
264 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
265 TestStoppedBinaryUpgrade/Setup 0.45
266 TestStoppedBinaryUpgrade/Upgrade 134.53
274 TestStoppedBinaryUpgrade/MinikubeLogs 0.88
x
+
TestDownloadOnly/v1.20.0/json-events (9.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-718795 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-718795 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.090403516s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-718795
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-718795: exit status 85 (64.664564ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-718795 | jenkins | v1.33.1 | 19 Aug 24 10:44 UTC |          |
	|         | -p download-only-718795        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 10:44:59
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 10:44:59.344742  106644 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:44:59.345036  106644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:44:59.345047  106644 out.go:358] Setting ErrFile to fd 2...
	I0819 10:44:59.345054  106644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:44:59.345242  106644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	W0819 10:44:59.345412  106644 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19476-99410/.minikube/config/config.json: open /home/jenkins/minikube-integration/19476-99410/.minikube/config/config.json: no such file or directory
	I0819 10:44:59.346094  106644 out.go:352] Setting JSON to true
	I0819 10:44:59.347069  106644 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1645,"bootTime":1724062654,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 10:44:59.347143  106644 start.go:139] virtualization: kvm guest
	I0819 10:44:59.349922  106644 out.go:97] [download-only-718795] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0819 10:44:59.350070  106644 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball: no such file or directory
	I0819 10:44:59.350150  106644 notify.go:220] Checking for updates...
	I0819 10:44:59.351529  106644 out.go:169] MINIKUBE_LOCATION=19476
	I0819 10:44:59.353083  106644 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:44:59.354427  106644 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 10:44:59.355917  106644 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 10:44:59.357251  106644 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0819 10:44:59.359904  106644 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 10:44:59.360140  106644 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:44:59.466353  106644 out.go:97] Using the kvm2 driver based on user configuration
	I0819 10:44:59.466401  106644 start.go:297] selected driver: kvm2
	I0819 10:44:59.466417  106644 start.go:901] validating driver "kvm2" against <nil>
	I0819 10:44:59.466828  106644 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:44:59.466971  106644 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19476-99410/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 10:44:59.483434  106644 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 10:44:59.483500  106644 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 10:44:59.484059  106644 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0819 10:44:59.484240  106644 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 10:44:59.484312  106644 cni.go:84] Creating CNI manager for ""
	I0819 10:44:59.484325  106644 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 10:44:59.484334  106644 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 10:44:59.484387  106644 start.go:340] cluster config:
	{Name:download-only-718795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-718795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:44:59.484575  106644 iso.go:125] acquiring lock: {Name:mkf3e460532d8bd3cc1352bbf698537d79a481a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:44:59.486359  106644 out.go:97] Downloading VM boot image ...
	I0819 10:44:59.486411  106644 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19476-99410/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 10:45:02.161923  106644 out.go:97] Starting "download-only-718795" primary control-plane node in "download-only-718795" cluster
	I0819 10:45:02.161968  106644 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 10:45:02.185350  106644 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 10:45:02.185380  106644 cache.go:56] Caching tarball of preloaded images
	I0819 10:45:02.185549  106644 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 10:45:02.187425  106644 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0819 10:45:02.187458  106644 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0819 10:45:02.220297  106644 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-718795 host does not exist
	  To start a cluster, run: "minikube start -p download-only-718795"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-718795
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/json-events (11.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-768344 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-768344 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.344237772s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (11.34s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-768344
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-768344: exit status 85 (60.988354ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-718795 | jenkins | v1.33.1 | 19 Aug 24 10:44 UTC |                     |
	|         | -p download-only-718795        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Aug 24 10:45 UTC | 19 Aug 24 10:45 UTC |
	| delete  | -p download-only-718795        | download-only-718795 | jenkins | v1.33.1 | 19 Aug 24 10:45 UTC | 19 Aug 24 10:45 UTC |
	| start   | -o=json --download-only        | download-only-768344 | jenkins | v1.33.1 | 19 Aug 24 10:45 UTC |                     |
	|         | -p download-only-768344        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 10:45:08
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 10:45:08.778221  106851 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:45:08.778536  106851 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:45:08.778546  106851 out.go:358] Setting ErrFile to fd 2...
	I0819 10:45:08.778551  106851 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:45:08.778779  106851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 10:45:08.779427  106851 out.go:352] Setting JSON to true
	I0819 10:45:08.780467  106851 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1655,"bootTime":1724062654,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 10:45:08.780538  106851 start.go:139] virtualization: kvm guest
	I0819 10:45:08.782610  106851 out.go:97] [download-only-768344] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 10:45:08.782862  106851 notify.go:220] Checking for updates...
	I0819 10:45:08.784067  106851 out.go:169] MINIKUBE_LOCATION=19476
	I0819 10:45:08.785533  106851 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:45:08.787134  106851 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 10:45:08.788772  106851 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 10:45:08.790231  106851 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0819 10:45:08.792743  106851 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 10:45:08.792982  106851 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:45:08.827857  106851 out.go:97] Using the kvm2 driver based on user configuration
	I0819 10:45:08.827896  106851 start.go:297] selected driver: kvm2
	I0819 10:45:08.827916  106851 start.go:901] validating driver "kvm2" against <nil>
	I0819 10:45:08.828305  106851 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:45:08.828690  106851 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19476-99410/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 10:45:08.845713  106851 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 10:45:08.845800  106851 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 10:45:08.846345  106851 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0819 10:45:08.846517  106851 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 10:45:08.846596  106851 cni.go:84] Creating CNI manager for ""
	I0819 10:45:08.846613  106851 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 10:45:08.846626  106851 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 10:45:08.846698  106851 start.go:340] cluster config:
	{Name:download-only-768344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-768344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:45:08.846823  106851 iso.go:125] acquiring lock: {Name:mkf3e460532d8bd3cc1352bbf698537d79a481a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:45:08.848650  106851 out.go:97] Starting "download-only-768344" primary control-plane node in "download-only-768344" cluster
	I0819 10:45:08.848675  106851 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 10:45:08.910253  106851 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 10:45:08.910287  106851 cache.go:56] Caching tarball of preloaded images
	I0819 10:45:08.910454  106851 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 10:45:08.912316  106851 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0819 10:45:08.912342  106851 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0819 10:45:08.935120  106851 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:4a2ae163f7665ceaa95dee8ffc8efdba -> /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 10:45:18.523102  106851 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0819 10:45:18.523204  106851 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19476-99410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-768344 host does not exist
	  To start a cluster, run: "minikube start -p download-only-768344"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-768344
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-709318 --alsologtostderr --binary-mirror http://127.0.0.1:34665 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-709318" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-709318
--- PASS: TestBinaryMirror (0.61s)

                                                
                                    
x
+
TestOffline (77.84s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-320395 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-320395 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m16.706739553s)
helpers_test.go:175: Cleaning up "offline-crio-320395" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-320395
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-320395: (1.134121987s)
--- PASS: TestOffline (77.84s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-479471
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-479471: exit status 85 (55.657646ms)

                                                
                                                
-- stdout --
	* Profile "addons-479471" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-479471"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-479471
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-479471: exit status 85 (54.454496ms)

                                                
                                                
-- stdout --
	* Profile "addons-479471" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-479471"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestCertOptions (60.43s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-294561 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-294561 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (58.997237058s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-294561 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-294561 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-294561 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-294561" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-294561
--- PASS: TestCertOptions (60.43s)

                                                
                                    
x
+
TestCertExpiration (307.16s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-497658 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-497658 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m30.549537514s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-497658 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-497658 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (35.529726094s)
helpers_test.go:175: Cleaning up "cert-expiration-497658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-497658
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-497658: (1.079999536s)
--- PASS: TestCertExpiration (307.16s)

                                                
                                    
x
+
TestForceSystemdFlag (87.01s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-557690 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-557690 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m25.811483963s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-557690 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-557690" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-557690
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-557690: (1.003296471s)
--- PASS: TestForceSystemdFlag (87.01s)

                                                
                                    
x
+
TestForceSystemdEnv (44.32s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-356192 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-356192 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (43.336200672s)
helpers_test.go:175: Cleaning up "force-systemd-env-356192" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-356192
--- PASS: TestForceSystemdEnv (44.32s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (3.71s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.71s)

                                                
                                    
TestErrorSpam/setup (39.04s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-456816 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-456816 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-456816 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-456816 --driver=kvm2  --container-runtime=crio: (39.041678694s)
--- PASS: TestErrorSpam/setup (39.04s)

                                                
                                    
TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-456816 --log_dir /tmp/nospam-456816 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-456816 --log_dir /tmp/nospam-456816 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-456816 --log_dir /tmp/nospam-456816 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
TestErrorSpam/status (0.73s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-456816 --log_dir /tmp/nospam-456816 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-456816 --log_dir /tmp/nospam-456816 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-456816 --log_dir /tmp/nospam-456816 status
--- PASS: TestErrorSpam/status (0.73s)

                                                
                                    
TestErrorSpam/pause (1.52s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-456816 --log_dir /tmp/nospam-456816 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-456816 --log_dir /tmp/nospam-456816 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-456816 --log_dir /tmp/nospam-456816 pause
--- PASS: TestErrorSpam/pause (1.52s)

                                                
                                    
TestErrorSpam/unpause (1.66s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-456816 --log_dir /tmp/nospam-456816 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-456816 --log_dir /tmp/nospam-456816 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-456816 --log_dir /tmp/nospam-456816 unpause
--- PASS: TestErrorSpam/unpause (1.66s)

                                                
                                    
TestErrorSpam/stop (4s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-456816 --log_dir /tmp/nospam-456816 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-456816 --log_dir /tmp/nospam-456816 stop: (1.559702368s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-456816 --log_dir /tmp/nospam-456816 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-456816 --log_dir /tmp/nospam-456816 stop: (1.12608141s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-456816 --log_dir /tmp/nospam-456816 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-456816 --log_dir /tmp/nospam-456816 stop: (1.308796511s)
--- PASS: TestErrorSpam/stop (4.00s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19476-99410/.minikube/files/etc/test/nested/copy/106632/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (61.57s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-881155 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-881155 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m1.566526687s)
--- PASS: TestFunctional/serial/StartWithProxy (61.57s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (34.11s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-881155 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-881155 --alsologtostderr -v=8: (34.107757528s)
functional_test.go:663: soft start took 34.108652864s for "functional-881155" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.11s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-881155 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-881155 cache add registry.k8s.io/pause:3.1: (1.139716339s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-881155 cache add registry.k8s.io/pause:3.3: (1.303847659s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-881155 cache add registry.k8s.io/pause:latest: (1.130266778s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.57s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.02s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-881155 /tmp/TestFunctionalserialCacheCmdcacheadd_local2989259807/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 cache add minikube-local-cache-test:functional-881155
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-881155 cache add minikube-local-cache-test:functional-881155: (1.681986405s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 cache delete minikube-local-cache-test:functional-881155
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-881155
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.02s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.75s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-881155 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (218.368626ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-881155 cache reload: (1.066497395s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.75s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)
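Taken together, the CacheCmd subtests above cover the whole image-cache lifecycle. A condensed sketch of that workflow, using the image tags and profile name from the log (timings and listings will differ):

# Pull remote images into minikube's local cache
out/minikube-linux-amd64 -p functional-881155 cache add registry.k8s.io/pause:3.1
out/minikube-linux-amd64 cache list

# Verify the cached image is present inside the node
out/minikube-linux-amd64 -p functional-881155 ssh sudo crictl images

# Remove it from the node, then restore it from the cache
out/minikube-linux-amd64 -p functional-881155 ssh sudo crictl rmi registry.k8s.io/pause:latest
out/minikube-linux-amd64 -p functional-881155 cache reload

# Finally drop the cache entry
out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1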

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 kubectl -- --context functional-881155 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-881155 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (32.4s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-881155 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-881155 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.396813356s)
functional_test.go:761: restart took 32.39696068s for "functional-881155" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.40s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-881155 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.37s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-881155 logs: (1.368434894s)
--- PASS: TestFunctional/serial/LogsCmd (1.37s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.38s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 logs --file /tmp/TestFunctionalserialLogsFileCmd4081181402/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-881155 logs --file /tmp/TestFunctionalserialLogsFileCmd4081181402/001/logs.txt: (1.375547174s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.38s)

                                                
                                    
TestFunctional/serial/InvalidService (5.71s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-881155 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-881155
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-881155: exit status 115 (278.986396ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.35:31862 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-881155 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-881155 delete -f testdata/invalidsvc.yaml: (2.227205059s)
--- PASS: TestFunctional/serial/InvalidService (5.71s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.35s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-881155 config get cpus: exit status 14 (52.815519ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-881155 config get cpus: exit status 14 (56.654134ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)
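The two exit-status-14 results above are the expected response to running config get on a key that is not set; the round trip the test performs looks like this (profile, key, and value taken from the log):

out/minikube-linux-amd64 -p functional-881155 config set cpus 2
out/minikube-linux-amd64 -p functional-881155 config get cpus     # expected to print the stored value
out/minikube-linux-amd64 -p functional-881155 config unset cpus
out/minikube-linux-amd64 -p functional-881155 config get cpus     # "specified key could not be found in config", exit 14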

                                                
                                    
TestFunctional/parallel/DashboardCmd (15.25s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-881155 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-881155 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 119824: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.25s)

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-881155 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-881155 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (148.698729ms)

                                                
                                                
-- stdout --
	* [functional-881155] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19476
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19476-99410/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:28:49.850020  119690 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:28:49.850153  119690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:28:49.850164  119690 out.go:358] Setting ErrFile to fd 2...
	I0819 11:28:49.850171  119690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:28:49.850467  119690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 11:28:49.851178  119690 out.go:352] Setting JSON to false
	I0819 11:28:49.852449  119690 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4276,"bootTime":1724062654,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 11:28:49.852516  119690 start.go:139] virtualization: kvm guest
	I0819 11:28:49.854605  119690 out.go:177] * [functional-881155] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 11:28:49.856259  119690 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 11:28:49.856295  119690 notify.go:220] Checking for updates...
	I0819 11:28:49.859354  119690 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:28:49.860887  119690 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 11:28:49.862181  119690 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 11:28:49.863651  119690 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 11:28:49.864986  119690 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:28:49.866794  119690 config.go:182] Loaded profile config "functional-881155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:28:49.867397  119690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:28:49.867483  119690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:28:49.883319  119690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37353
	I0819 11:28:49.883823  119690 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:28:49.884419  119690 main.go:141] libmachine: Using API Version  1
	I0819 11:28:49.884470  119690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:28:49.884856  119690 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:28:49.885061  119690 main.go:141] libmachine: (functional-881155) Calling .DriverName
	I0819 11:28:49.885328  119690 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:28:49.885631  119690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:28:49.885665  119690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:28:49.901411  119690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34565
	I0819 11:28:49.901892  119690 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:28:49.902419  119690 main.go:141] libmachine: Using API Version  1
	I0819 11:28:49.902448  119690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:28:49.902830  119690 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:28:49.903021  119690 main.go:141] libmachine: (functional-881155) Calling .DriverName
	I0819 11:28:49.938250  119690 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 11:28:49.939698  119690 start.go:297] selected driver: kvm2
	I0819 11:28:49.939758  119690 start.go:901] validating driver "kvm2" against &{Name:functional-881155 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-881155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:28:49.939874  119690 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:28:49.941980  119690 out.go:201] 
	W0819 11:28:49.943259  119690 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0819 11:28:49.944411  119690 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-881155 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
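The non-zero exit above is intentional: a 250MB request is below the usable minimum of 1800MB, so the dry run fails with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23), while the second dry run without the undersized request succeeds. A sketch of both invocations from the log:

# Rejected: requested memory below the usable minimum -> exit status 23
out/minikube-linux-amd64 start -p functional-881155 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio

# Accepted: same dry run without the undersized memory request
out/minikube-linux-amd64 start -p functional-881155 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio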

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-881155 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-881155 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (142.203181ms)

                                                
                                                
-- stdout --
	* [functional-881155] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19476
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19476-99410/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:28:50.129899  119745 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:28:50.130023  119745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:28:50.130031  119745 out.go:358] Setting ErrFile to fd 2...
	I0819 11:28:50.130036  119745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:28:50.130297  119745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 11:28:50.130845  119745 out.go:352] Setting JSON to false
	I0819 11:28:50.131799  119745 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4276,"bootTime":1724062654,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 11:28:50.131864  119745 start.go:139] virtualization: kvm guest
	I0819 11:28:50.133882  119745 out.go:177] * [functional-881155] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0819 11:28:50.135143  119745 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 11:28:50.135209  119745 notify.go:220] Checking for updates...
	I0819 11:28:50.137644  119745 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:28:50.139096  119745 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19476-99410/kubeconfig
	I0819 11:28:50.140536  119745 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube
	I0819 11:28:50.141743  119745 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 11:28:50.143100  119745 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:28:50.144854  119745 config.go:182] Loaded profile config "functional-881155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 11:28:50.145275  119745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:28:50.145339  119745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:28:50.161151  119745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46299
	I0819 11:28:50.161694  119745 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:28:50.162289  119745 main.go:141] libmachine: Using API Version  1
	I0819 11:28:50.162316  119745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:28:50.162769  119745 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:28:50.162963  119745 main.go:141] libmachine: (functional-881155) Calling .DriverName
	I0819 11:28:50.163247  119745 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:28:50.163552  119745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 11:28:50.163588  119745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 11:28:50.179275  119745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36135
	I0819 11:28:50.179765  119745 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:28:50.180219  119745 main.go:141] libmachine: Using API Version  1
	I0819 11:28:50.180246  119745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:28:50.180566  119745 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:28:50.180781  119745 main.go:141] libmachine: (functional-881155) Calling .DriverName
	I0819 11:28:50.215522  119745 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0819 11:28:50.216849  119745 start.go:297] selected driver: kvm2
	I0819 11:28:50.216882  119745 start.go:901] validating driver "kvm2" against &{Name:functional-881155 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-881155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:28:50.217063  119745 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:28:50.219821  119745 out.go:201] 
	W0819 11:28:50.221213  119745 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0819 11:28:50.222576  119745 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.9s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.90s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (13.61s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-881155 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-881155 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-6k9kq" [e075bc3d-5ffa-4019-9817-7b9e4736ebf2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-6k9kq" [e075bc3d-5ffa-4019-9817-7b9e4736ebf2] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.005207049s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.35:31800
functional_test.go:1675: http://192.168.39.35:31800: success! body:

Hostname: hello-node-connect-67bdd5bbb4-6k9kq

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.35:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.35:31800
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.61s)
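The connectivity check above amounts to exposing a deployment as a NodePort and requesting the URL minikube reports; a sketch with the names and image from the log (the curl step is an added illustration, not part of the test output, and the port will vary):

kubectl --context functional-881155 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-881155 expose deployment hello-node-connect --type=NodePort --port=8080
out/minikube-linux-amd64 -p functional-881155 service hello-node-connect --url   # e.g. http://192.168.39.35:31800
curl http://192.168.39.35:31800/                                                 # returns an echoserver report like the one shown above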

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (34.4s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [db2033ca-3dcb-4c26-8209-2c075998b82e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003788236s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-881155 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-881155 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-881155 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-881155 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-881155 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [402dab80-5ebe-4e20-a538-95c6cd006d0e] Pending
helpers_test.go:344: "sp-pod" [402dab80-5ebe-4e20-a538-95c6cd006d0e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [402dab80-5ebe-4e20-a538-95c6cd006d0e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.005800915s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-881155 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-881155 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-881155 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0e79ab41-a77d-414e-a6a8-84b524d3ad6a] Pending
helpers_test.go:344: "sp-pod" [0e79ab41-a77d-414e-a6a8-84b524d3ad6a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0e79ab41-a77d-414e-a6a8-84b524d3ad6a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003646326s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-881155 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (34.40s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.45s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.3s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh -n functional-881155 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 cp functional-881155:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd919785630/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh -n functional-881155 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh -n functional-881155 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.30s)
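A condensed sketch of the copy round trip exercised above (the host-side destination path here is arbitrary; the log used a per-test temp directory):

# Host -> node, then read it back over ssh
out/minikube-linux-amd64 -p functional-881155 cp testdata/cp-test.txt /home/docker/cp-test.txt
out/minikube-linux-amd64 -p functional-881155 ssh -n functional-881155 "sudo cat /home/docker/cp-test.txt"

# Node -> host
out/minikube-linux-amd64 -p functional-881155 cp functional-881155:/home/docker/cp-test.txt /tmp/cp-test.txt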

                                                
                                    
TestFunctional/parallel/MySQL (25.32s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-881155 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-bzhnp" [12d91576-fa21-4d8c-8207-f42840464dfd] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-bzhnp" [12d91576-fa21-4d8c-8207-f42840464dfd] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.003816363s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-881155 exec mysql-6cdb49bbb-bzhnp -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-881155 exec mysql-6cdb49bbb-bzhnp -- mysql -ppassword -e "show databases;": exit status 1 (127.270739ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-881155 exec mysql-6cdb49bbb-bzhnp -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.32s)

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/106632/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh "sudo cat /etc/test/nested/copy/106632/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

                                                
                                    
TestFunctional/parallel/CertSync (1.25s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/106632.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh "sudo cat /etc/ssl/certs/106632.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/106632.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh "sudo cat /usr/share/ca-certificates/106632.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/1066322.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh "sudo cat /etc/ssl/certs/1066322.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/1066322.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh "sudo cat /usr/share/ca-certificates/1066322.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.25s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-881155 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-881155 ssh "sudo systemctl is-active docker": exit status 1 (241.051769ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-881155 ssh "sudo systemctl is-active containerd": exit status 1 (240.923062ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)
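
The non-zero exits above are expected on a CRI-O node: systemctl is-active prints the unit state and exits 0 only for "active", so docker and containerd report "inactive" with exit status 3, which minikube's ssh wrapper surfaces as exit status 1. A hand-run equivalent of the same check:

    for unit in crio docker containerd; do
      # is-active exits 0 for an active unit and non-zero (typically 3) otherwise.
      if out/minikube-linux-amd64 -p functional-881155 ssh "sudo systemctl is-active $unit"; then
        echo "$unit: active"
      else
        echo "$unit: not active"
      fi
    done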

                                                
                                    
TestFunctional/parallel/License (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-881155 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-881155 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-881155 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-881155 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 118575: os: process already finished
helpers_test.go:502: unable to terminate pid 118583: os: process already finished
helpers_test.go:502: unable to terminate pid 118662: os: process already finished
helpers_test.go:508: unable to kill pid 118554: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-881155 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-881155
localhost/kicbase/echo-server:functional-881155
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-881155 image ls --format short --alsologtostderr:
I0819 11:29:00.559327  120791 out.go:345] Setting OutFile to fd 1 ...
I0819 11:29:00.559481  120791 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:29:00.559612  120791 out.go:358] Setting ErrFile to fd 2...
I0819 11:29:00.559629  120791 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:29:00.559935  120791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
I0819 11:29:00.560704  120791 config.go:182] Loaded profile config "functional-881155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 11:29:00.560817  120791 config.go:182] Loaded profile config "functional-881155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 11:29:00.561231  120791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 11:29:00.561284  120791 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 11:29:00.577519  120791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39135
I0819 11:29:00.578098  120791 main.go:141] libmachine: () Calling .GetVersion
I0819 11:29:00.578728  120791 main.go:141] libmachine: Using API Version  1
I0819 11:29:00.578755  120791 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 11:29:00.579099  120791 main.go:141] libmachine: () Calling .GetMachineName
I0819 11:29:00.579310  120791 main.go:141] libmachine: (functional-881155) Calling .GetState
I0819 11:29:00.581306  120791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 11:29:00.581355  120791 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 11:29:00.597778  120791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34401
I0819 11:29:00.598295  120791 main.go:141] libmachine: () Calling .GetVersion
I0819 11:29:00.598874  120791 main.go:141] libmachine: Using API Version  1
I0819 11:29:00.598905  120791 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 11:29:00.599235  120791 main.go:141] libmachine: () Calling .GetMachineName
I0819 11:29:00.599402  120791 main.go:141] libmachine: (functional-881155) Calling .DriverName
I0819 11:29:00.599630  120791 ssh_runner.go:195] Run: systemctl --version
I0819 11:29:00.599669  120791 main.go:141] libmachine: (functional-881155) Calling .GetSSHHostname
I0819 11:29:00.602789  120791 main.go:141] libmachine: (functional-881155) DBG | domain functional-881155 has defined MAC address 52:54:00:23:6f:8b in network mk-functional-881155
I0819 11:29:00.603229  120791 main.go:141] libmachine: (functional-881155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:6f:8b", ip: ""} in network mk-functional-881155: {Iface:virbr1 ExpiryTime:2024-08-19 12:26:24 +0000 UTC Type:0 Mac:52:54:00:23:6f:8b Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:functional-881155 Clientid:01:52:54:00:23:6f:8b}
I0819 11:29:00.603262  120791 main.go:141] libmachine: (functional-881155) DBG | domain functional-881155 has defined IP address 192.168.39.35 and MAC address 52:54:00:23:6f:8b in network mk-functional-881155
I0819 11:29:00.603429  120791 main.go:141] libmachine: (functional-881155) Calling .GetSSHPort
I0819 11:29:00.603634  120791 main.go:141] libmachine: (functional-881155) Calling .GetSSHKeyPath
I0819 11:29:00.603826  120791 main.go:141] libmachine: (functional-881155) Calling .GetSSHUsername
I0819 11:29:00.604009  120791 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/functional-881155/id_rsa Username:docker}
I0819 11:29:00.731004  120791 ssh_runner.go:195] Run: sudo crictl images --output json
I0819 11:29:00.787049  120791 main.go:141] libmachine: Making call to close driver server
I0819 11:29:00.787069  120791 main.go:141] libmachine: (functional-881155) Calling .Close
I0819 11:29:00.787392  120791 main.go:141] libmachine: Successfully made call to close driver server
I0819 11:29:00.787416  120791 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 11:29:00.787433  120791 main.go:141] libmachine: Making call to close driver server
I0819 11:29:00.787443  120791 main.go:141] libmachine: (functional-881155) Calling .Close
I0819 11:29:00.787659  120791 main.go:141] libmachine: Successfully made call to close driver server
I0819 11:29:00.787671  120791 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
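
As the stderr above shows, "image ls" shells into the node and reads "sudo crictl images --output json". A rough host-side equivalent of the short listing (jq on the host is an assumption; the JSON shape is crictl's {"images": [...]} schema):

    out/minikube-linux-amd64 -p functional-881155 ssh "sudo crictl images --output json" \
      | jq -r '.images[].repoTags[]' | sort -r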

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-881155 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/my-image                      | functional-881155  | 7025bdb643559 | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
| docker.io/library/nginx                 | latest             | 5ef79149e0ec8 | 192MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/kicbase/echo-server           | functional-881155  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-881155  | b54ce45d2a868 | 3.33kB |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| docker.io/library/nginx                 | alpine             | 0f0eda053dc5c | 44.7MB |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-881155 image ls --format table --alsologtostderr:
I0819 11:29:05.237519  120930 out.go:345] Setting OutFile to fd 1 ...
I0819 11:29:05.237647  120930 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:29:05.237655  120930 out.go:358] Setting ErrFile to fd 2...
I0819 11:29:05.237660  120930 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:29:05.237832  120930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
I0819 11:29:05.238456  120930 config.go:182] Loaded profile config "functional-881155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 11:29:05.238561  120930 config.go:182] Loaded profile config "functional-881155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 11:29:05.238932  120930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 11:29:05.238983  120930 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 11:29:05.254356  120930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39071
I0819 11:29:05.254879  120930 main.go:141] libmachine: () Calling .GetVersion
I0819 11:29:05.255422  120930 main.go:141] libmachine: Using API Version  1
I0819 11:29:05.255453  120930 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 11:29:05.255808  120930 main.go:141] libmachine: () Calling .GetMachineName
I0819 11:29:05.256012  120930 main.go:141] libmachine: (functional-881155) Calling .GetState
I0819 11:29:05.257808  120930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 11:29:05.257850  120930 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 11:29:05.274047  120930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36359
I0819 11:29:05.274450  120930 main.go:141] libmachine: () Calling .GetVersion
I0819 11:29:05.275012  120930 main.go:141] libmachine: Using API Version  1
I0819 11:29:05.275041  120930 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 11:29:05.275363  120930 main.go:141] libmachine: () Calling .GetMachineName
I0819 11:29:05.275566  120930 main.go:141] libmachine: (functional-881155) Calling .DriverName
I0819 11:29:05.275796  120930 ssh_runner.go:195] Run: systemctl --version
I0819 11:29:05.275824  120930 main.go:141] libmachine: (functional-881155) Calling .GetSSHHostname
I0819 11:29:05.278718  120930 main.go:141] libmachine: (functional-881155) DBG | domain functional-881155 has defined MAC address 52:54:00:23:6f:8b in network mk-functional-881155
I0819 11:29:05.279092  120930 main.go:141] libmachine: (functional-881155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:6f:8b", ip: ""} in network mk-functional-881155: {Iface:virbr1 ExpiryTime:2024-08-19 12:26:24 +0000 UTC Type:0 Mac:52:54:00:23:6f:8b Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:functional-881155 Clientid:01:52:54:00:23:6f:8b}
I0819 11:29:05.279131  120930 main.go:141] libmachine: (functional-881155) DBG | domain functional-881155 has defined IP address 192.168.39.35 and MAC address 52:54:00:23:6f:8b in network mk-functional-881155
I0819 11:29:05.279265  120930 main.go:141] libmachine: (functional-881155) Calling .GetSSHPort
I0819 11:29:05.279467  120930 main.go:141] libmachine: (functional-881155) Calling .GetSSHKeyPath
I0819 11:29:05.279591  120930 main.go:141] libmachine: (functional-881155) Calling .GetSSHUsername
I0819 11:29:05.279756  120930 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/functional-881155/id_rsa Username:docker}
I0819 11:29:05.362151  120930 ssh_runner.go:195] Run: sudo crictl images --output json
I0819 11:29:05.407652  120930 main.go:141] libmachine: Making call to close driver server
I0819 11:29:05.407669  120930 main.go:141] libmachine: (functional-881155) Calling .Close
I0819 11:29:05.407960  120930 main.go:141] libmachine: Successfully made call to close driver server
I0819 11:29:05.407999  120930 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 11:29:05.408008  120930 main.go:141] libmachine: (functional-881155) DBG | Closing plugin on server side
I0819 11:29:05.408022  120930 main.go:141] libmachine: Making call to close driver server
I0819 11:29:05.408032  120930 main.go:141] libmachine: (functional-881155) Calling .Close
I0819 11:29:05.408235  120930 main.go:141] libmachine: Successfully made call to close driver server
I0819 11:29:05.408250  120930 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-881155 image ls --format json --alsologtostderr:
[{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"87165492"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"b54ce45d2a868516bee2c40ae4c8e24219d152e5f1f8d44af9decf5642ba2088","repoDigests":["localhost/minikube-local-cache-test@sha256:a63ba38a845986ee884c5628fbd0f2ce391727c1e779b6cfa8dc18ad630e6bba"],"repoTags":["localhost/minikube-local-cache-test:functional-881155"],"size":"3330"},
{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f
505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"51d65778241a9ee5e295e00138aa7f200a9d7b04d50ee1e1e378dbd699e02ac5","repoDigests":["docker.io/library/51e4736d9a5ddbfd9f59cfcf2006481813ab61eb224a381322201f10f8b72690-tmp@sha256:69ed90fba31c9c1d171f8b9d4be5e3d3398d3c029df02f73a732ffbbab86b61c"],"r
epoTags":[],"size":"1466018"},{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add","docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f"],"repoTags":["docker.io/library/nginx:latest"],"size":"191841612"},{"id":"7025bdb643559790b1126dbd8a5c71c33ad583971f2324c5e9596f03b7443627","repoDigests":["localhost/my-image@sha256:406179c9955424f5b552685e3b42bb12982f42981fa90054e28a22f0f60f0124"],"repoTags":["localhost/my-image:functional-881155"],"size":"1468600"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},{"id":"0184c1613d92
931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-881155"],"size":"4943877"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92
dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a","repoDigests":["docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0","docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44668625"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-m
inikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7
f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-881155 image ls --format json --alsologtostderr:
I0819 11:29:05.457837  120954 out.go:345] Setting OutFile to fd 1 ...
I0819 11:29:05.457936  120954 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:29:05.457944  120954 out.go:358] Setting ErrFile to fd 2...
I0819 11:29:05.457948  120954 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:29:05.458130  120954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
I0819 11:29:05.458664  120954 config.go:182] Loaded profile config "functional-881155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 11:29:05.458767  120954 config.go:182] Loaded profile config "functional-881155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 11:29:05.459141  120954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 11:29:05.459187  120954 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 11:29:05.475285  120954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39227
I0819 11:29:05.475795  120954 main.go:141] libmachine: () Calling .GetVersion
I0819 11:29:05.476416  120954 main.go:141] libmachine: Using API Version  1
I0819 11:29:05.476443  120954 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 11:29:05.476776  120954 main.go:141] libmachine: () Calling .GetMachineName
I0819 11:29:05.476964  120954 main.go:141] libmachine: (functional-881155) Calling .GetState
I0819 11:29:05.478991  120954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 11:29:05.479040  120954 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 11:29:05.497084  120954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46055
I0819 11:29:05.497642  120954 main.go:141] libmachine: () Calling .GetVersion
I0819 11:29:05.498259  120954 main.go:141] libmachine: Using API Version  1
I0819 11:29:05.498286  120954 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 11:29:05.498686  120954 main.go:141] libmachine: () Calling .GetMachineName
I0819 11:29:05.498918  120954 main.go:141] libmachine: (functional-881155) Calling .DriverName
I0819 11:29:05.499179  120954 ssh_runner.go:195] Run: systemctl --version
I0819 11:29:05.499217  120954 main.go:141] libmachine: (functional-881155) Calling .GetSSHHostname
I0819 11:29:05.502243  120954 main.go:141] libmachine: (functional-881155) DBG | domain functional-881155 has defined MAC address 52:54:00:23:6f:8b in network mk-functional-881155
I0819 11:29:05.502554  120954 main.go:141] libmachine: (functional-881155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:6f:8b", ip: ""} in network mk-functional-881155: {Iface:virbr1 ExpiryTime:2024-08-19 12:26:24 +0000 UTC Type:0 Mac:52:54:00:23:6f:8b Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:functional-881155 Clientid:01:52:54:00:23:6f:8b}
I0819 11:29:05.502582  120954 main.go:141] libmachine: (functional-881155) DBG | domain functional-881155 has defined IP address 192.168.39.35 and MAC address 52:54:00:23:6f:8b in network mk-functional-881155
I0819 11:29:05.502790  120954 main.go:141] libmachine: (functional-881155) Calling .GetSSHPort
I0819 11:29:05.502995  120954 main.go:141] libmachine: (functional-881155) Calling .GetSSHKeyPath
I0819 11:29:05.503111  120954 main.go:141] libmachine: (functional-881155) Calling .GetSSHUsername
I0819 11:29:05.503246  120954 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/functional-881155/id_rsa Username:docker}
I0819 11:29:05.596415  120954 ssh_runner.go:195] Run: sudo crictl images --output json
I0819 11:29:05.658915  120954 main.go:141] libmachine: Making call to close driver server
I0819 11:29:05.658933  120954 main.go:141] libmachine: (functional-881155) Calling .Close
I0819 11:29:05.659252  120954 main.go:141] libmachine: Successfully made call to close driver server
I0819 11:29:05.659269  120954 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 11:29:05.659283  120954 main.go:141] libmachine: Making call to close driver server
I0819 11:29:05.659291  120954 main.go:141] libmachine: (functional-881155) Calling .Close
I0819 11:29:05.659511  120954 main.go:141] libmachine: Successfully made call to close driver server
I0819 11:29:05.659524  120954 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
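
One way to consume the JSON listing above is to filter it on the host; a small sketch, assuming jq is installed (the etcd tag is just one of the entries shown in the output above):

    out/minikube-linux-amd64 -p functional-881155 image ls --format json \
      | jq -r '.[] | select(.repoTags | index("registry.k8s.io/etcd:3.5.15-0")) | .id, .size'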

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-881155 image ls --format yaml --alsologtostderr:
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
- docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f
repoTags:
- docker.io/library/nginx:latest
size: "191841612"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a
repoDigests:
- docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0
- docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158
repoTags:
- docker.io/library/nginx:alpine
size: "44668625"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-881155
size: "4943877"
- id: b54ce45d2a868516bee2c40ae4c8e24219d152e5f1f8d44af9decf5642ba2088
repoDigests:
- localhost/minikube-local-cache-test@sha256:a63ba38a845986ee884c5628fbd0f2ce391727c1e779b6cfa8dc18ad630e6bba
repoTags:
- localhost/minikube-local-cache-test:functional-881155
size: "3330"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-881155 image ls --format yaml --alsologtostderr:
I0819 11:29:00.850082  120830 out.go:345] Setting OutFile to fd 1 ...
I0819 11:29:00.850230  120830 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:29:00.850240  120830 out.go:358] Setting ErrFile to fd 2...
I0819 11:29:00.850247  120830 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:29:00.850531  120830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
I0819 11:29:00.851341  120830 config.go:182] Loaded profile config "functional-881155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 11:29:00.851487  120830 config.go:182] Loaded profile config "functional-881155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 11:29:00.852106  120830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 11:29:00.852165  120830 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 11:29:00.867864  120830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33509
I0819 11:29:00.868439  120830 main.go:141] libmachine: () Calling .GetVersion
I0819 11:29:00.869049  120830 main.go:141] libmachine: Using API Version  1
I0819 11:29:00.869082  120830 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 11:29:00.869402  120830 main.go:141] libmachine: () Calling .GetMachineName
I0819 11:29:00.869589  120830 main.go:141] libmachine: (functional-881155) Calling .GetState
I0819 11:29:00.871563  120830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 11:29:00.871624  120830 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 11:29:00.887865  120830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43875
I0819 11:29:00.888377  120830 main.go:141] libmachine: () Calling .GetVersion
I0819 11:29:00.888886  120830 main.go:141] libmachine: Using API Version  1
I0819 11:29:00.888915  120830 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 11:29:00.889240  120830 main.go:141] libmachine: () Calling .GetMachineName
I0819 11:29:00.889415  120830 main.go:141] libmachine: (functional-881155) Calling .DriverName
I0819 11:29:00.889616  120830 ssh_runner.go:195] Run: systemctl --version
I0819 11:29:00.889649  120830 main.go:141] libmachine: (functional-881155) Calling .GetSSHHostname
I0819 11:29:00.892692  120830 main.go:141] libmachine: (functional-881155) DBG | domain functional-881155 has defined MAC address 52:54:00:23:6f:8b in network mk-functional-881155
I0819 11:29:00.893155  120830 main.go:141] libmachine: (functional-881155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:6f:8b", ip: ""} in network mk-functional-881155: {Iface:virbr1 ExpiryTime:2024-08-19 12:26:24 +0000 UTC Type:0 Mac:52:54:00:23:6f:8b Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:functional-881155 Clientid:01:52:54:00:23:6f:8b}
I0819 11:29:00.893191  120830 main.go:141] libmachine: (functional-881155) DBG | domain functional-881155 has defined IP address 192.168.39.35 and MAC address 52:54:00:23:6f:8b in network mk-functional-881155
I0819 11:29:00.893338  120830 main.go:141] libmachine: (functional-881155) Calling .GetSSHPort
I0819 11:29:00.893524  120830 main.go:141] libmachine: (functional-881155) Calling .GetSSHKeyPath
I0819 11:29:00.893715  120830 main.go:141] libmachine: (functional-881155) Calling .GetSSHUsername
I0819 11:29:00.893913  120830 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/functional-881155/id_rsa Username:docker}
I0819 11:29:01.009071  120830 ssh_runner.go:195] Run: sudo crictl images --output json
I0819 11:29:01.058132  120830 main.go:141] libmachine: Making call to close driver server
I0819 11:29:01.058147  120830 main.go:141] libmachine: (functional-881155) Calling .Close
I0819 11:29:01.058453  120830 main.go:141] libmachine: Successfully made call to close driver server
I0819 11:29:01.058481  120830 main.go:141] libmachine: (functional-881155) DBG | Closing plugin on server side
I0819 11:29:01.058484  120830 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 11:29:01.058531  120830 main.go:141] libmachine: Making call to close driver server
I0819 11:29:01.058545  120830 main.go:141] libmachine: (functional-881155) Calling .Close
I0819 11:29:01.058800  120830 main.go:141] libmachine: (functional-881155) DBG | Closing plugin on server side
I0819 11:29:01.058820  120830 main.go:141] libmachine: Successfully made call to close driver server
I0819 11:29:01.058834  120830 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-881155 ssh pgrep buildkitd: exit status 1 (237.012664ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 image build -t localhost/my-image:functional-881155 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-881155 image build -t localhost/my-image:functional-881155 testdata/build --alsologtostderr: (3.672028578s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-881155 image build -t localhost/my-image:functional-881155 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 51d65778241
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-881155
--> 7025bdb6435
Successfully tagged localhost/my-image:functional-881155
7025bdb643559790b1126dbd8a5c71c33ad583971f2324c5e9596f03b7443627
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-881155 image build -t localhost/my-image:functional-881155 testdata/build --alsologtostderr:
I0819 11:29:01.348559  120882 out.go:345] Setting OutFile to fd 1 ...
I0819 11:29:01.348872  120882 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:29:01.348883  120882 out.go:358] Setting ErrFile to fd 2...
I0819 11:29:01.348888  120882 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:29:01.349080  120882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
I0819 11:29:01.349684  120882 config.go:182] Loaded profile config "functional-881155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 11:29:01.350246  120882 config.go:182] Loaded profile config "functional-881155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 11:29:01.350632  120882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 11:29:01.350676  120882 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 11:29:01.366831  120882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45111
I0819 11:29:01.367323  120882 main.go:141] libmachine: () Calling .GetVersion
I0819 11:29:01.367923  120882 main.go:141] libmachine: Using API Version  1
I0819 11:29:01.367953  120882 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 11:29:01.368320  120882 main.go:141] libmachine: () Calling .GetMachineName
I0819 11:29:01.368533  120882 main.go:141] libmachine: (functional-881155) Calling .GetState
I0819 11:29:01.370438  120882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 11:29:01.370479  120882 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 11:29:01.386152  120882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40631
I0819 11:29:01.386788  120882 main.go:141] libmachine: () Calling .GetVersion
I0819 11:29:01.387305  120882 main.go:141] libmachine: Using API Version  1
I0819 11:29:01.387331  120882 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 11:29:01.387655  120882 main.go:141] libmachine: () Calling .GetMachineName
I0819 11:29:01.387842  120882 main.go:141] libmachine: (functional-881155) Calling .DriverName
I0819 11:29:01.388060  120882 ssh_runner.go:195] Run: systemctl --version
I0819 11:29:01.388101  120882 main.go:141] libmachine: (functional-881155) Calling .GetSSHHostname
I0819 11:29:01.390805  120882 main.go:141] libmachine: (functional-881155) DBG | domain functional-881155 has defined MAC address 52:54:00:23:6f:8b in network mk-functional-881155
I0819 11:29:01.391257  120882 main.go:141] libmachine: (functional-881155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:6f:8b", ip: ""} in network mk-functional-881155: {Iface:virbr1 ExpiryTime:2024-08-19 12:26:24 +0000 UTC Type:0 Mac:52:54:00:23:6f:8b Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:functional-881155 Clientid:01:52:54:00:23:6f:8b}
I0819 11:29:01.391303  120882 main.go:141] libmachine: (functional-881155) DBG | domain functional-881155 has defined IP address 192.168.39.35 and MAC address 52:54:00:23:6f:8b in network mk-functional-881155
I0819 11:29:01.391386  120882 main.go:141] libmachine: (functional-881155) Calling .GetSSHPort
I0819 11:29:01.391584  120882 main.go:141] libmachine: (functional-881155) Calling .GetSSHKeyPath
I0819 11:29:01.391775  120882 main.go:141] libmachine: (functional-881155) Calling .GetSSHUsername
I0819 11:29:01.391952  120882 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/functional-881155/id_rsa Username:docker}
I0819 11:29:01.511469  120882 build_images.go:161] Building image from path: /tmp/build.3914014237.tar
I0819 11:29:01.511564  120882 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0819 11:29:01.525482  120882 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3914014237.tar
I0819 11:29:01.530524  120882 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3914014237.tar: stat -c "%s %y" /var/lib/minikube/build/build.3914014237.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3914014237.tar': No such file or directory
I0819 11:29:01.530571  120882 ssh_runner.go:362] scp /tmp/build.3914014237.tar --> /var/lib/minikube/build/build.3914014237.tar (3072 bytes)
I0819 11:29:01.576259  120882 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3914014237
I0819 11:29:01.608417  120882 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3914014237 -xf /var/lib/minikube/build/build.3914014237.tar
I0819 11:29:01.644537  120882 crio.go:315] Building image: /var/lib/minikube/build/build.3914014237
I0819 11:29:01.644641  120882 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-881155 /var/lib/minikube/build/build.3914014237 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0819 11:29:04.940469  120882 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-881155 /var/lib/minikube/build/build.3914014237 --cgroup-manager=cgroupfs: (3.295786136s)
I0819 11:29:04.940575  120882 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3914014237
I0819 11:29:04.958071  120882 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3914014237.tar
I0819 11:29:04.968466  120882 build_images.go:217] Built localhost/my-image:functional-881155 from /tmp/build.3914014237.tar
I0819 11:29:04.968506  120882 build_images.go:133] succeeded building to: functional-881155
I0819 11:29:04.968512  120882 build_images.go:134] failed building to: 
I0819 11:29:04.968541  120882 main.go:141] libmachine: Making call to close driver server
I0819 11:29:04.968552  120882 main.go:141] libmachine: (functional-881155) Calling .Close
I0819 11:29:04.968877  120882 main.go:141] libmachine: Successfully made call to close driver server
I0819 11:29:04.968895  120882 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 11:29:04.968904  120882 main.go:141] libmachine: Making call to close driver server
I0819 11:29:04.968912  120882 main.go:141] libmachine: (functional-881155) Calling .Close
I0819 11:29:04.969160  120882 main.go:141] libmachine: (functional-881155) DBG | Closing plugin on server side
I0819 11:29:04.969183  120882 main.go:141] libmachine: Successfully made call to close driver server
I0819 11:29:04.969203  120882 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 image ls
2024/08/19 11:29:05 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.13s)
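
The three STEP lines in the build output imply a small build context; a by-hand reconstruction of the same build (the Dockerfile is inferred from the log, and content.txt below is a placeholder rather than the real testdata/build file):

    mkdir -p /tmp/minikube-build && cd /tmp/minikube-build
    echo "placeholder" > content.txt
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    # minikube tars this context, copies it to /var/lib/minikube/build on the node,
    # and builds it there with podman (see the stderr above).
    out/minikube-linux-amd64 -p functional-881155 image build \
      -t localhost/my-image:functional-881155 . --alsologtostderr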

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.564137574s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-881155
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.58s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-881155 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-881155 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [516abe4c-87bf-46dd-a5b0-f37ccf35fb6d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [516abe4c-87bf-46dd-a5b0-f37ccf35fb6d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004477209s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.20s)
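
The readiness wait done by helpers_test.go can be approximated directly with kubectl; the manifest path is the test's own file and the 4m timeout mirrors the log above:

    kubectl --context functional-881155 apply -f testdata/testsvc.yaml
    kubectl --context functional-881155 wait pod -l run=nginx-svc \
      --for=condition=Ready --timeout=4m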

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 image load --daemon kicbase/echo-server:functional-881155 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-881155 image load --daemon kicbase/echo-server:functional-881155 --alsologtostderr: (1.090813735s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 image load --daemon kicbase/echo-server:functional-881155 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p functional-881155 image load --daemon kicbase/echo-server:functional-881155 --alsologtostderr: (1.228420249s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-881155
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 image load --daemon kicbase/echo-server:functional-881155 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-881155 image load --daemon kicbase/echo-server:functional-881155 --alsologtostderr: (2.032592393s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.98s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 image save kicbase/echo-server:functional-881155 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 image rm kicbase/echo-server:functional-881155 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-881155
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 image save --daemon kicbase/echo-server:functional-881155 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-881155
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)
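
The image subcommand round-trip exercised by the blocks above can be reproduced by hand roughly as follows. This is an illustrative sketch, not part of the suite; it uses a plain minikube binary in place of out/minikube-linux-amd64, and the tarball path is hypothetical:

    $ minikube -p functional-881155 image load --daemon kicbase/echo-server:functional-881155   # local Docker daemon -> cluster runtime
    $ minikube -p functional-881155 image ls                                                     # confirm the image is present
    $ minikube -p functional-881155 image save kicbase/echo-server:functional-881155 /tmp/echo-server-save.tar   # export to a tarball
    $ minikube -p functional-881155 image rm kicbase/echo-server:functional-881155               # remove it from the cluster
    $ minikube -p functional-881155 image load /tmp/echo-server-save.tar                         # re-import from the tarball
    $ minikube -p functional-881155 image save --daemon kicbase/echo-server:functional-881155    # copy back into the local Docker daemon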

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-881155 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-881155 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-cd68w" [8e205e60-10e1-4e39-952c-e03ab0b5d8ee] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-cd68w" [8e205e60-10e1-4e39-952c-e03ab0b5d8ee] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.006484686s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.17s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-881155 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.10.140 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-881155 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 119333: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)
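
Taken together, the tunnel sub-tests above correspond to the following manual workflow; a rough sketch assuming a LoadBalancer Service named nginx-svc (as in testdata/testsvc.yaml) and route privileges on the host:

    $ minikube -p functional-881155 tunnel &                                  # keep the tunnel running in the background
    $ kubectl --context functional-881155 apply -f testdata/testsvc.yaml      # creates the nginx-svc LoadBalancer Service
    $ kubectl --context functional-881155 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'   # IP assigned by the tunnel
    $ curl http://<ingress-ip>/                                               # reachable from the host while the tunnel runs
    $ kill %1                                                                 # stop the tunnel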

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "222.073317ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "48.55423ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "276.48387ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "49.096064ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)
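
The profile listing checks above can be run directly. The jq expression below assumes the JSON schema currently emitted by profile list -o json (top-level "valid"/"invalid" arrays whose entries carry a "Name" field), which may differ between minikube releases:

    $ minikube profile list                                    # human-readable table
    $ minikube profile list -o json --light                    # faster listing that skips live status checks
    $ minikube profile list -o json | jq -r '.valid[].Name'    # assumed schema; prints one profile name per line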

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.61s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (10.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-881155 /tmp/TestFunctionalparallelMountCmdany-port3306283240/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724066927348609129" to /tmp/TestFunctionalparallelMountCmdany-port3306283240/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724066927348609129" to /tmp/TestFunctionalparallelMountCmdany-port3306283240/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724066927348609129" to /tmp/TestFunctionalparallelMountCmdany-port3306283240/001/test-1724066927348609129
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-881155 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (257.022226ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 19 11:28 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 19 11:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 19 11:28 test-1724066927348609129
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh cat /mount-9p/test-1724066927348609129
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-881155 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [bdb1f274-d1ae-47b0-9220-77a59aa19499] Pending
helpers_test.go:344: "busybox-mount" [bdb1f274-d1ae-47b0-9220-77a59aa19499] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [bdb1f274-d1ae-47b0-9220-77a59aa19499] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [bdb1f274-d1ae-47b0-9220-77a59aa19499] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.004972405s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-881155 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-881155 /tmp/TestFunctionalparallelMountCmdany-port3306283240/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 service list
functional_test.go:1459: (dbg) Done: out/minikube-linux-amd64 -p functional-881155 service list: (1.223981652s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.22s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-linux-amd64 -p functional-881155 service list -o json: (1.228455189s)
functional_test.go:1494: Took "1.228591628s" to run "out/minikube-linux-amd64 -p functional-881155 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.23s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.35:30314
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.35:30314
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)
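
The ServiceCmd cases above amount to exposing a Deployment via a NodePort and asking minikube for its URL; a sketch using the same names as the test:

    $ kubectl --context functional-881155 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    $ kubectl --context functional-881155 expose deployment hello-node --type=NodePort --port=8080
    $ minikube -p functional-881155 service hello-node --url           # e.g. http://192.168.39.35:30314
    $ minikube -p functional-881155 service hello-node --https --url   # same NodePort, https scheme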

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-881155 /tmp/TestFunctionalparallelMountCmdspecific-port3565065671/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-881155 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (212.147338ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-881155 /tmp/TestFunctionalparallelMountCmdspecific-port3565065671/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-881155 ssh "sudo umount -f /mount-9p": exit status 1 (225.730069ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-881155 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-881155 /tmp/TestFunctionalparallelMountCmdspecific-port3565065671/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.76s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-881155 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2926930443/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-881155 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2926930443/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-881155 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2926930443/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-881155 ssh "findmnt -T" /mount1: exit status 1 (284.046725ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-881155 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-881155 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2926930443/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-881155 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2926930443/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-881155 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2926930443/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.30s)
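
The MountCmd group exercises the 9p host mount; a minimal manual equivalent, with a hypothetical host directory /srv/shared:

    $ minikube mount -p functional-881155 /srv/shared:/mount-9p --port 46464 &   # serve the host directory into the guest
    $ minikube -p functional-881155 ssh "findmnt -T /mount-9p | grep 9p"         # verify the 9p mount is present
    $ minikube -p functional-881155 ssh -- ls -la /mount-9p                      # host files visible from the guest
    $ minikube mount -p functional-881155 --kill=true                            # kill lingering mount processes (VerifyCleanup)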

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-881155 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
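
The UpdateContextCmd cases only check that the kubeconfig entry for the profile can be regenerated; the underlying command is simply:

    $ minikube -p functional-881155 update-context   # re-point the kubeconfig entry at the profile's current API server address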

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-881155
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-881155
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-881155
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (190.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-503856 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-503856 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m9.999446842s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (190.65s)
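
The flags above are the core of a highly-available cluster start; a manual equivalent of StartCluster, sketched with the same driver and runtime as this run:

    $ minikube start -p ha-503856 --ha --wait=true --memory=2200 --driver=kvm2 --container-runtime=crio
    $ minikube -p ha-503856 status      # one entry per node; --ha brings up multiple control-plane nodes
    $ kubectl get nodes                 # confirm all nodes report Ready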

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-503856 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-503856 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-503856 -- rollout status deployment/busybox: (3.749274479s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-503856 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-503856 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-503856 -- exec busybox-7dff88458-7wpbx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-503856 -- exec busybox-7dff88458-nbmlj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-503856 -- exec busybox-7dff88458-nxhq6 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-503856 -- exec busybox-7dff88458-7wpbx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-503856 -- exec busybox-7dff88458-nbmlj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-503856 -- exec busybox-7dff88458-nxhq6 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-503856 -- exec busybox-7dff88458-7wpbx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-503856 -- exec busybox-7dff88458-nbmlj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-503856 -- exec busybox-7dff88458-nxhq6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.85s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-503856 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-503856 -- exec busybox-7dff88458-7wpbx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-503856 -- exec busybox-7dff88458-7wpbx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-503856 -- exec busybox-7dff88458-nbmlj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-503856 -- exec busybox-7dff88458-nbmlj -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-503856 -- exec busybox-7dff88458-nxhq6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-503856 -- exec busybox-7dff88458-nxhq6 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.22s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (54.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-503856 -v=7 --alsologtostderr
E0819 11:33:35.348003  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:33:35.355262  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:33:35.366790  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:33:35.388259  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:33:35.429667  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:33:35.511181  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:33:35.673091  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:33:35.994563  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-503856 -v=7 --alsologtostderr: (53.709671028s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr
E0819 11:33:36.635903  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.55s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-503856 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 status --output json -v=7 --alsologtostderr
E0819 11:33:37.917935  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 cp testdata/cp-test.txt ha-503856:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 cp ha-503856:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4008298079/001/cp-test_ha-503856.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 cp ha-503856:/home/docker/cp-test.txt ha-503856-m02:/home/docker/cp-test_ha-503856_ha-503856-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856-m02 "sudo cat /home/docker/cp-test_ha-503856_ha-503856-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 cp ha-503856:/home/docker/cp-test.txt ha-503856-m03:/home/docker/cp-test_ha-503856_ha-503856-m03.txt
E0819 11:33:40.479799  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856-m03 "sudo cat /home/docker/cp-test_ha-503856_ha-503856-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 cp ha-503856:/home/docker/cp-test.txt ha-503856-m04:/home/docker/cp-test_ha-503856_ha-503856-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856-m04 "sudo cat /home/docker/cp-test_ha-503856_ha-503856-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 cp testdata/cp-test.txt ha-503856-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 cp ha-503856-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4008298079/001/cp-test_ha-503856-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 cp ha-503856-m02:/home/docker/cp-test.txt ha-503856:/home/docker/cp-test_ha-503856-m02_ha-503856.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856 "sudo cat /home/docker/cp-test_ha-503856-m02_ha-503856.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 cp ha-503856-m02:/home/docker/cp-test.txt ha-503856-m03:/home/docker/cp-test_ha-503856-m02_ha-503856-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856-m03 "sudo cat /home/docker/cp-test_ha-503856-m02_ha-503856-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 cp ha-503856-m02:/home/docker/cp-test.txt ha-503856-m04:/home/docker/cp-test_ha-503856-m02_ha-503856-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856-m04 "sudo cat /home/docker/cp-test_ha-503856-m02_ha-503856-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 cp testdata/cp-test.txt ha-503856-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 cp ha-503856-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4008298079/001/cp-test_ha-503856-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 cp ha-503856-m03:/home/docker/cp-test.txt ha-503856:/home/docker/cp-test_ha-503856-m03_ha-503856.txt
E0819 11:33:45.601233  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856 "sudo cat /home/docker/cp-test_ha-503856-m03_ha-503856.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 cp ha-503856-m03:/home/docker/cp-test.txt ha-503856-m02:/home/docker/cp-test_ha-503856-m03_ha-503856-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856-m02 "sudo cat /home/docker/cp-test_ha-503856-m03_ha-503856-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 cp ha-503856-m03:/home/docker/cp-test.txt ha-503856-m04:/home/docker/cp-test_ha-503856-m03_ha-503856-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856-m04 "sudo cat /home/docker/cp-test_ha-503856-m03_ha-503856-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 cp testdata/cp-test.txt ha-503856-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 cp ha-503856-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4008298079/001/cp-test_ha-503856-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 cp ha-503856-m04:/home/docker/cp-test.txt ha-503856:/home/docker/cp-test_ha-503856-m04_ha-503856.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856 "sudo cat /home/docker/cp-test_ha-503856-m04_ha-503856.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 cp ha-503856-m04:/home/docker/cp-test.txt ha-503856-m02:/home/docker/cp-test_ha-503856-m04_ha-503856-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856-m02 "sudo cat /home/docker/cp-test_ha-503856-m04_ha-503856-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 cp ha-503856-m04:/home/docker/cp-test.txt ha-503856-m03:/home/docker/cp-test_ha-503856-m04_ha-503856-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 ssh -n ha-503856-m03 "sudo cat /home/docker/cp-test_ha-503856-m04_ha-503856-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.84s)
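
The CopyFile block fans the same file across every node pair; the core pattern, sketched for the primary node and one secondary:

    $ minikube -p ha-503856 cp testdata/cp-test.txt ha-503856:/home/docker/cp-test.txt                      # host -> node
    $ minikube -p ha-503856 cp ha-503856:/home/docker/cp-test.txt ha-503856-m02:/home/docker/cp-test.txt    # node -> node
    $ minikube -p ha-503856 ssh -n ha-503856-m02 "sudo cat /home/docker/cp-test.txt"                        # verify on the target node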

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.477605847s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-503856 node delete m03 -v=7 --alsologtostderr: (15.832667469s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.57s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (347.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-503856 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0819 11:48:35.349029  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:49:58.413540  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-503856 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m46.883169923s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (347.73s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.42s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (80.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-503856 --control-plane -v=7 --alsologtostderr
E0819 11:53:35.347660  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-503856 --control-plane -v=7 --alsologtostderr: (1m19.39624128s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-503856 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.22s)
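
Node scaling in the HA tests reduces to the node subcommand; a sketch of the operations exercised by AddWorkerNode, DeleteSecondaryNode and AddSecondaryNode:

    $ minikube node add -p ha-503856                     # add a worker node
    $ minikube node add -p ha-503856 --control-plane     # add another control-plane node
    $ minikube -p ha-503856 node delete m03              # remove a node by name
    $ minikube -p ha-503856 status                       # per-node status after the change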

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                    
TestJSONOutput/start/Command (56s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-603036 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-603036 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (55.998535761s)
--- PASS: TestJSONOutput/start/Command (56.00s)
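
With --output=json, minikube start emits one CloudEvents-style JSON object per line (the same format shown in the TestErrorJSONOutput stdout below); a sketch of extracting just the step messages, assuming jq is installed:

    $ minikube start -p json-output-603036 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'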

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-603036 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-603036 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.67s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-603036 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-603036 --output=json --user=testUser: (6.667546586s)
--- PASS: TestJSONOutput/stop/Command (6.67s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-050945 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-050945 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.512ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6ad8689e-835c-4380-9268-d54e553b7ffa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-050945] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"52e6ede6-61d4-4ce2-b6bf-21db352c4182","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19476"}}
	{"specversion":"1.0","id":"f965ce25-33f4-4174-bd20-b8c4fedfa970","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"63b7edf9-8dec-436a-90ff-91dc94c6c245","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19476-99410/kubeconfig"}}
	{"specversion":"1.0","id":"90e901c8-244b-4c90-b3f7-39a63af242df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube"}}
	{"specversion":"1.0","id":"60384c77-3630-4ed8-be86-ea6fb800bf04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2e7a26ff-be4e-4844-b9ff-9bc8ee33a5d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0e8f4baa-7020-4643-960b-a0a57cdf19c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-050945" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-050945
--- PASS: TestErrorJSONOutput (0.20s)
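
The --output=json run above emits one CloudEvent per line (specversion, type, data, ...). As a rough sketch only, not part of the test suite, such a stream could be decoded like this; the struct fields mirror the JSON lines shown in the log, not minikube's internal types.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors the fields visible in the JSON lines above; it is an
// illustrative type, not one taken from the minikube source tree.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Read lines piped in, e.g. from: minikube start -p <profile> --output=json
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore anything that is not a JSON event line
		}
		fmt.Printf("%-40s %s\n", ev.Type, ev.Data["message"])
	}
}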

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (90.6s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-495529 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-495529 --driver=kvm2  --container-runtime=crio: (44.18466527s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-498425 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-498425 --driver=kvm2  --container-runtime=crio: (43.714774426s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-495529
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-498425
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-498425" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-498425
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-498425: (1.023219142s)
helpers_test.go:175: Cleaning up "first-495529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-495529
--- PASS: TestMinikubeProfile (90.60s)
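
The `profile list -ojson` calls above print a JSON document describing the known profiles. A hedged sketch follows; it decodes into generic raw JSON values so no field names beyond the top level are assumed.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Assumes the binary path used throughout this report.
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	var doc map[string]json.RawMessage
	if err := json.Unmarshal(out, &doc); err != nil {
		panic(err)
	}
	for key, val := range doc {
		fmt.Printf("%s: %d bytes\n", key, len(val))
	}
}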

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.15s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-231976 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-231976 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.152564737s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.15s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-231976 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-231976 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)
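
The verification step above lists the mount point and greps the guest's mount table for a 9p entry over `minikube ssh`. A rough Go equivalent (the profile name is the one started in the previous step; the check is only a substring match):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "mount-start-1-231976" // profile created by StartWithMountFirst above
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "--", "mount").Output()
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(out), "9p") {
		fmt.Println("9p mount present")
	} else {
		fmt.Println("9p mount missing")
	}
}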

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.86s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-245728 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-245728 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.858137557s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.86s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-245728 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-245728 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-231976 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-245728 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-245728 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-245728
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-245728: (1.272393088s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.41s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-245728
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-245728: (22.407472425s)
--- PASS: TestMountStart/serial/RestartStopped (23.41s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-245728 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-245728 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (109.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-320821 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0819 11:58:35.348194  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-320821 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m49.165624565s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.56s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320821 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320821 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-320821 -- rollout status deployment/busybox: (3.131324152s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320821 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320821 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320821 -- exec busybox-7dff88458-88ptz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320821 -- exec busybox-7dff88458-kjbkv -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320821 -- exec busybox-7dff88458-88ptz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320821 -- exec busybox-7dff88458-kjbkv -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320821 -- exec busybox-7dff88458-88ptz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320821 -- exec busybox-7dff88458-kjbkv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.67s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320821 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320821 -- exec busybox-7dff88458-88ptz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320821 -- exec busybox-7dff88458-88ptz -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320821 -- exec busybox-7dff88458-kjbkv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320821 -- exec busybox-7dff88458-kjbkv -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                    
TestMultiNode/serial/AddNode (49.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-320821 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-320821 -v 3 --alsologtostderr: (49.054803736s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (49.62s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-320821 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 cp testdata/cp-test.txt multinode-320821:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 ssh -n multinode-320821 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 cp multinode-320821:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile690601289/001/cp-test_multinode-320821.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 ssh -n multinode-320821 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 cp multinode-320821:/home/docker/cp-test.txt multinode-320821-m02:/home/docker/cp-test_multinode-320821_multinode-320821-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 ssh -n multinode-320821 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 ssh -n multinode-320821-m02 "sudo cat /home/docker/cp-test_multinode-320821_multinode-320821-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 cp multinode-320821:/home/docker/cp-test.txt multinode-320821-m03:/home/docker/cp-test_multinode-320821_multinode-320821-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 ssh -n multinode-320821 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 ssh -n multinode-320821-m03 "sudo cat /home/docker/cp-test_multinode-320821_multinode-320821-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 cp testdata/cp-test.txt multinode-320821-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 ssh -n multinode-320821-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 cp multinode-320821-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile690601289/001/cp-test_multinode-320821-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 ssh -n multinode-320821-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 cp multinode-320821-m02:/home/docker/cp-test.txt multinode-320821:/home/docker/cp-test_multinode-320821-m02_multinode-320821.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 ssh -n multinode-320821-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 ssh -n multinode-320821 "sudo cat /home/docker/cp-test_multinode-320821-m02_multinode-320821.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 cp multinode-320821-m02:/home/docker/cp-test.txt multinode-320821-m03:/home/docker/cp-test_multinode-320821-m02_multinode-320821-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 ssh -n multinode-320821-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 ssh -n multinode-320821-m03 "sudo cat /home/docker/cp-test_multinode-320821-m02_multinode-320821-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 cp testdata/cp-test.txt multinode-320821-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 ssh -n multinode-320821-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 cp multinode-320821-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile690601289/001/cp-test_multinode-320821-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 ssh -n multinode-320821-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 cp multinode-320821-m03:/home/docker/cp-test.txt multinode-320821:/home/docker/cp-test_multinode-320821-m03_multinode-320821.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 ssh -n multinode-320821-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 ssh -n multinode-320821 "sudo cat /home/docker/cp-test_multinode-320821-m03_multinode-320821.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 cp multinode-320821-m03:/home/docker/cp-test.txt multinode-320821-m02:/home/docker/cp-test_multinode-320821-m03_multinode-320821-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 ssh -n multinode-320821-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 ssh -n multinode-320821-m02 "sudo cat /home/docker/cp-test_multinode-320821-m03_multinode-320821-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.25s)
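
Each cp step above is followed by an `ssh ... sudo cat` to confirm the file arrived. A trimmed-down sketch of one such round trip, using the same binary and profile names as the log and minimal error handling:

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary used throughout this report.
func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	// Copy a local test file onto the primary node ...
	if _, err := run("-p", "multinode-320821", "cp", "testdata/cp-test.txt", "multinode-320821:/home/docker/cp-test.txt"); err != nil {
		panic(err)
	}
	// ... then read it back over ssh to confirm the contents landed.
	got, err := run("-p", "multinode-320821", "ssh", "-n", "multinode-320821", "sudo cat /home/docker/cp-test.txt")
	if err != nil {
		panic(err)
	}
	fmt.Print(got)
}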

                                                
                                    
TestMultiNode/serial/StopNode (2.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-320821 node stop m03: (1.390758361s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-320821 status: exit status 7 (418.975854ms)

                                                
                                                
-- stdout --
	multinode-320821
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-320821-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-320821-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-320821 status --alsologtostderr: exit status 7 (424.797766ms)

                                                
                                                
-- stdout --
	multinode-320821
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-320821-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-320821-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 12:00:46.151849  138528 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:00:46.152375  138528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:00:46.152396  138528 out.go:358] Setting ErrFile to fd 2...
	I0819 12:00:46.152404  138528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:00:46.152870  138528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-99410/.minikube/bin
	I0819 12:00:46.153421  138528 out.go:352] Setting JSON to false
	I0819 12:00:46.153458  138528 mustload.go:65] Loading cluster: multinode-320821
	I0819 12:00:46.153577  138528 notify.go:220] Checking for updates...
	I0819 12:00:46.154070  138528 config.go:182] Loaded profile config "multinode-320821": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 12:00:46.154093  138528 status.go:255] checking status of multinode-320821 ...
	I0819 12:00:46.154594  138528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:00:46.154647  138528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:00:46.171066  138528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39407
	I0819 12:00:46.171508  138528 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:00:46.172213  138528 main.go:141] libmachine: Using API Version  1
	I0819 12:00:46.172234  138528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:00:46.172679  138528 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:00:46.172939  138528 main.go:141] libmachine: (multinode-320821) Calling .GetState
	I0819 12:00:46.174765  138528 status.go:330] multinode-320821 host status = "Running" (err=<nil>)
	I0819 12:00:46.174785  138528 host.go:66] Checking if "multinode-320821" exists ...
	I0819 12:00:46.175090  138528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:00:46.175131  138528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:00:46.191093  138528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37521
	I0819 12:00:46.191690  138528 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:00:46.192230  138528 main.go:141] libmachine: Using API Version  1
	I0819 12:00:46.192250  138528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:00:46.192530  138528 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:00:46.192713  138528 main.go:141] libmachine: (multinode-320821) Calling .GetIP
	I0819 12:00:46.195290  138528 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:00:46.195844  138528 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:00:46.195880  138528 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:00:46.196045  138528 host.go:66] Checking if "multinode-320821" exists ...
	I0819 12:00:46.196365  138528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:00:46.196420  138528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:00:46.212658  138528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
	I0819 12:00:46.213087  138528 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:00:46.213637  138528 main.go:141] libmachine: Using API Version  1
	I0819 12:00:46.213667  138528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:00:46.214082  138528 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:00:46.214301  138528 main.go:141] libmachine: (multinode-320821) Calling .DriverName
	I0819 12:00:46.214528  138528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:00:46.214562  138528 main.go:141] libmachine: (multinode-320821) Calling .GetSSHHostname
	I0819 12:00:46.217472  138528 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:00:46.217915  138528 main.go:141] libmachine: (multinode-320821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:94:68", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:58:06 +0000 UTC Type:0 Mac:52:54:00:cc:94:68 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-320821 Clientid:01:52:54:00:cc:94:68}
	I0819 12:00:46.217941  138528 main.go:141] libmachine: (multinode-320821) DBG | domain multinode-320821 has defined IP address 192.168.39.88 and MAC address 52:54:00:cc:94:68 in network mk-multinode-320821
	I0819 12:00:46.218098  138528 main.go:141] libmachine: (multinode-320821) Calling .GetSSHPort
	I0819 12:00:46.218303  138528 main.go:141] libmachine: (multinode-320821) Calling .GetSSHKeyPath
	I0819 12:00:46.218477  138528 main.go:141] libmachine: (multinode-320821) Calling .GetSSHUsername
	I0819 12:00:46.218637  138528 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/multinode-320821/id_rsa Username:docker}
	I0819 12:00:46.302922  138528 ssh_runner.go:195] Run: systemctl --version
	I0819 12:00:46.308842  138528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:00:46.323523  138528 kubeconfig.go:125] found "multinode-320821" server: "https://192.168.39.88:8443"
	I0819 12:00:46.323566  138528 api_server.go:166] Checking apiserver status ...
	I0819 12:00:46.323614  138528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:00:46.337440  138528 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1111/cgroup
	W0819 12:00:46.347275  138528 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1111/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 12:00:46.347329  138528 ssh_runner.go:195] Run: ls
	I0819 12:00:46.351620  138528 api_server.go:253] Checking apiserver healthz at https://192.168.39.88:8443/healthz ...
	I0819 12:00:46.355858  138528 api_server.go:279] https://192.168.39.88:8443/healthz returned 200:
	ok
	I0819 12:00:46.355893  138528 status.go:422] multinode-320821 apiserver status = Running (err=<nil>)
	I0819 12:00:46.355914  138528 status.go:257] multinode-320821 status: &{Name:multinode-320821 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:00:46.355968  138528 status.go:255] checking status of multinode-320821-m02 ...
	I0819 12:00:46.356271  138528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:00:46.356297  138528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:00:46.371993  138528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38523
	I0819 12:00:46.372418  138528 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:00:46.372928  138528 main.go:141] libmachine: Using API Version  1
	I0819 12:00:46.372950  138528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:00:46.373271  138528 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:00:46.373465  138528 main.go:141] libmachine: (multinode-320821-m02) Calling .GetState
	I0819 12:00:46.374918  138528 status.go:330] multinode-320821-m02 host status = "Running" (err=<nil>)
	I0819 12:00:46.374936  138528 host.go:66] Checking if "multinode-320821-m02" exists ...
	I0819 12:00:46.375228  138528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:00:46.375273  138528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:00:46.391144  138528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43129
	I0819 12:00:46.391596  138528 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:00:46.392083  138528 main.go:141] libmachine: Using API Version  1
	I0819 12:00:46.392102  138528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:00:46.392394  138528 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:00:46.392594  138528 main.go:141] libmachine: (multinode-320821-m02) Calling .GetIP
	I0819 12:00:46.395102  138528 main.go:141] libmachine: (multinode-320821-m02) DBG | domain multinode-320821-m02 has defined MAC address 52:54:00:c4:ed:6e in network mk-multinode-320821
	I0819 12:00:46.395503  138528 main.go:141] libmachine: (multinode-320821-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:ed:6e", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:59:07 +0000 UTC Type:0 Mac:52:54:00:c4:ed:6e Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-320821-m02 Clientid:01:52:54:00:c4:ed:6e}
	I0819 12:00:46.395525  138528 main.go:141] libmachine: (multinode-320821-m02) DBG | domain multinode-320821-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:c4:ed:6e in network mk-multinode-320821
	I0819 12:00:46.395694  138528 host.go:66] Checking if "multinode-320821-m02" exists ...
	I0819 12:00:46.396137  138528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:00:46.396186  138528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:00:46.412865  138528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33513
	I0819 12:00:46.413346  138528 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:00:46.413789  138528 main.go:141] libmachine: Using API Version  1
	I0819 12:00:46.413807  138528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:00:46.414177  138528 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:00:46.414363  138528 main.go:141] libmachine: (multinode-320821-m02) Calling .DriverName
	I0819 12:00:46.414555  138528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:00:46.414580  138528 main.go:141] libmachine: (multinode-320821-m02) Calling .GetSSHHostname
	I0819 12:00:46.417425  138528 main.go:141] libmachine: (multinode-320821-m02) DBG | domain multinode-320821-m02 has defined MAC address 52:54:00:c4:ed:6e in network mk-multinode-320821
	I0819 12:00:46.417834  138528 main.go:141] libmachine: (multinode-320821-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:ed:6e", ip: ""} in network mk-multinode-320821: {Iface:virbr1 ExpiryTime:2024-08-19 12:59:07 +0000 UTC Type:0 Mac:52:54:00:c4:ed:6e Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-320821-m02 Clientid:01:52:54:00:c4:ed:6e}
	I0819 12:00:46.417863  138528 main.go:141] libmachine: (multinode-320821-m02) DBG | domain multinode-320821-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:c4:ed:6e in network mk-multinode-320821
	I0819 12:00:46.418018  138528 main.go:141] libmachine: (multinode-320821-m02) Calling .GetSSHPort
	I0819 12:00:46.418185  138528 main.go:141] libmachine: (multinode-320821-m02) Calling .GetSSHKeyPath
	I0819 12:00:46.418356  138528 main.go:141] libmachine: (multinode-320821-m02) Calling .GetSSHUsername
	I0819 12:00:46.418557  138528 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19476-99410/.minikube/machines/multinode-320821-m02/id_rsa Username:docker}
	I0819 12:00:46.498633  138528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:00:46.513088  138528 status.go:257] multinode-320821-m02 status: &{Name:multinode-320821-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:00:46.513126  138528 status.go:255] checking status of multinode-320821-m03 ...
	I0819 12:00:46.513418  138528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 12:00:46.513446  138528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 12:00:46.530038  138528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42441
	I0819 12:00:46.530512  138528 main.go:141] libmachine: () Calling .GetVersion
	I0819 12:00:46.531007  138528 main.go:141] libmachine: Using API Version  1
	I0819 12:00:46.531031  138528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 12:00:46.531391  138528 main.go:141] libmachine: () Calling .GetMachineName
	I0819 12:00:46.531581  138528 main.go:141] libmachine: (multinode-320821-m03) Calling .GetState
	I0819 12:00:46.533197  138528 status.go:330] multinode-320821-m03 host status = "Stopped" (err=<nil>)
	I0819 12:00:46.533215  138528 status.go:343] host is not running, skipping remaining checks
	I0819 12:00:46.533224  138528 status.go:257] multinode-320821-m03 status: &{Name:multinode-320821-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-320821 node start m03 -v=7 --alsologtostderr: (38.498976674s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.13s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-320821 node delete m03: (1.584029645s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.12s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (187.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-320821 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-320821 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m6.77476073s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320821 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (187.37s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (40.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-320821
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-320821-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-320821-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (67.525668ms)

                                                
                                                
-- stdout --
	* [multinode-320821-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19476
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19476-99410/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-320821-m02' is duplicated with machine name 'multinode-320821-m02' in profile 'multinode-320821'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-320821-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-320821-m03 --driver=kvm2  --container-runtime=crio: (39.223473238s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-320821
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-320821: exit status 80 (208.193534ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-320821 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-320821-m03 already exists in multinode-320821-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-320821-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.34s)

                                                
                                    
TestScheduledStopUnix (114.83s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-181400 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-181400 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.192842749s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-181400 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-181400 -n scheduled-stop-181400
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-181400 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-181400 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-181400 -n scheduled-stop-181400
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-181400
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-181400 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-181400
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-181400: exit status 7 (66.628272ms)

                                                
                                                
-- stdout --
	scheduled-stop-181400
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-181400 -n scheduled-stop-181400
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-181400 -n scheduled-stop-181400: exit status 7 (64.784874ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-181400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-181400
--- PASS: TestScheduledStopUnix (114.83s)
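
The test drives a schedule/cancel/re-schedule cycle and treats `status` exit code 7 as "stopped (may be ok)". A hedged sketch of the schedule-then-poll pattern; the profile name and exit-code meaning are taken from the log above.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	profile := "scheduled-stop-181400"
	// Ask minikube to stop the profile 15 seconds from now.
	if err := exec.Command("out/minikube-linux-amd64", "stop", "-p", profile, "--schedule", "15s").Run(); err != nil {
		panic(err)
	}
	// Poll status until it reports exit status 7, which the log above shows for a stopped host.
	for i := 0; i < 20; i++ {
		err := exec.Command("out/minikube-linux-amd64", "status", "-p", profile).Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
			fmt.Println("profile stopped")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for the scheduled stop")
}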

                                                
                                    
TestRunningBinaryUpgrade (197.22s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2912746160 start -p running-upgrade-357956 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0819 12:18:35.347935  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2912746160 start -p running-upgrade-357956 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m21.950528881s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-357956 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-357956 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (53.501536723s)
helpers_test.go:175: Cleaning up "running-upgrade-357956" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-357956
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-357956: (1.203654151s)
--- PASS: TestRunningBinaryUpgrade (197.22s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-340370 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-340370 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (89.208126ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-340370] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19476
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19476-99410/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-99410/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (93.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-340370 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-340370 --driver=kvm2  --container-runtime=crio: (1m33.113231122s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-340370 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (93.36s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (67.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-340370 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-340370 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m6.30076645s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-340370 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-340370 status -o json: exit status 2 (264.552334ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-340370","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-340370
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-340370: (1.214890561s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (67.78s)
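
The `status -o json` call above exits with status 2 but still prints a usable JSON object on stdout. The struct in this sketch simply mirrors the keys visible in that output (Name, Host, Kubelet, APIServer, Kubeconfig, Worker); it is not lifted from the minikube codebase.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileStatus mirrors the JSON keys shown in the output above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// A non-zero exit (status 2 above) is expected while components are stopped,
	// so the error is deliberately ignored and only stdout is decoded.
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "NoKubernetes-340370", "status", "-o", "json").Output()
	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
}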

                                                
                                    
TestNoKubernetes/serial/Start (53.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-340370 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-340370 --no-kubernetes --driver=kvm2  --container-runtime=crio: (53.953366925s)
--- PASS: TestNoKubernetes/serial/Start (53.95s)

                                                
                                    
TestPause/serial/Start (70.94s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-732494 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-732494 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m10.936069895s)
--- PASS: TestPause/serial/Start (70.94s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-340370 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-340370 "sudo systemctl is-active --quiet service kubelet": exit status 1 (202.578725ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.82s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-340370
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-340370: (1.286603899s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (62.68s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-340370 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-340370 --driver=kvm2  --container-runtime=crio: (1m2.681087497s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (62.68s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-340370 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-340370 "sudo systemctl is-active --quiet service kubelet": exit status 1 (196.093932ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

TestStoppedBinaryUpgrade/Setup (0.45s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.45s)

TestStoppedBinaryUpgrade/Upgrade (134.53s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.625414190 start -p stopped-upgrade-111717 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0819 12:23:18.417988  106632 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-99410/.minikube/profiles/functional-881155/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.625414190 start -p stopped-upgrade-111717 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m7.099254546s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.625414190 -p stopped-upgrade-111717 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.625414190 -p stopped-upgrade-111717 stop: (1.412735661s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-111717 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-111717 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m6.01872753s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (134.53s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-111717
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

Test skip (27/208)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)